2023-07-16 19:14:58,976 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436 2023-07-16 19:14:58,998 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-16 19:14:59,021 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 19:14:59,022 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf, deleteOnExit=true 2023-07-16 19:14:59,022 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 19:14:59,023 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/test.cache.data in system properties and HBase conf 2023-07-16 19:14:59,023 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 19:14:59,024 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir in system properties and HBase conf 2023-07-16 19:14:59,025 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 19:14:59,025 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 19:14:59,025 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 19:14:59,155 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-16 19:14:59,687 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 19:14:59,692 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 19:14:59,693 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 19:14:59,693 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 19:14:59,693 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 19:14:59,694 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 19:14:59,694 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 19:14:59,694 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 19:14:59,694 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 19:14:59,695 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 19:14:59,695 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/nfs.dump.dir in system properties and HBase conf 2023-07-16 19:14:59,696 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/java.io.tmpdir in system properties and HBase conf 2023-07-16 19:14:59,696 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 19:14:59,696 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 19:14:59,696 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 19:15:00,282 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 19:15:00,286 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 19:15:00,617 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-16 19:15:00,822 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-16 19:15:00,842 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:00,896 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:00,934 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/java.io.tmpdir/Jetty_localhost_33643_hdfs____.c2y16p/webapp 2023-07-16 19:15:01,085 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33643 2023-07-16 19:15:01,097 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 19:15:01,098 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 19:15:01,708 WARN [Listener at localhost/34211] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:01,815 WARN [Listener at localhost/34211] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:01,842 WARN [Listener at localhost/34211] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:01,851 INFO [Listener at localhost/34211] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:01,913 INFO [Listener at localhost/34211] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/java.io.tmpdir/Jetty_localhost_39833_datanode____3jz2vt/webapp 2023-07-16 19:15:02,039 INFO [Listener at localhost/34211] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39833 2023-07-16 19:15:02,459 WARN [Listener at localhost/42127] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:02,556 WARN [Listener at localhost/42127] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:02,560 WARN [Listener at localhost/42127] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:02,562 INFO [Listener at localhost/42127] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:02,573 INFO [Listener at localhost/42127] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/java.io.tmpdir/Jetty_localhost_40827_datanode____juh4a0/webapp 2023-07-16 19:15:02,727 INFO [Listener at localhost/42127] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40827 2023-07-16 19:15:02,743 WARN [Listener at localhost/33713] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:02,793 WARN [Listener at localhost/33713] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:02,798 WARN [Listener at localhost/33713] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:02,801 INFO [Listener at localhost/33713] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:02,810 INFO [Listener at localhost/33713] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/java.io.tmpdir/Jetty_localhost_44357_datanode____.llwt8k/webapp 2023-07-16 19:15:02,927 INFO [Listener at localhost/33713] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44357 2023-07-16 19:15:02,944 WARN [Listener at localhost/36799] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:03,080 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4219623e0ead5d4d: Processing first storage report for DS-3640afe9-07c0-497c-bcb3-0444937e269b from datanode a9d0d1bb-5569-4212-81f2-2aa371553760 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4219623e0ead5d4d: from storage DS-3640afe9-07c0-497c-bcb3-0444937e269b node DatanodeRegistration(127.0.0.1:46641, datanodeUuid=a9d0d1bb-5569-4212-81f2-2aa371553760, infoPort=46763, 
infoSecurePort=0, ipcPort=42127, storageInfo=lv=-57;cid=testClusterID;nsid=911605595;c=1689534900365), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x52ecb95d33f348f8: Processing first storage report for DS-98a6b168-8469-42ae-9886-ce0d071e64ba from datanode a4ca084f-b382-48e1-bfcf-52c89ddc82dc 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x52ecb95d33f348f8: from storage DS-98a6b168-8469-42ae-9886-ce0d071e64ba node DatanodeRegistration(127.0.0.1:42907, datanodeUuid=a4ca084f-b382-48e1-bfcf-52c89ddc82dc, infoPort=40345, infoSecurePort=0, ipcPort=33713, storageInfo=lv=-57;cid=testClusterID;nsid=911605595;c=1689534900365), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4219623e0ead5d4d: Processing first storage report for DS-78a79b6a-5906-4806-9a50-2936f1ebb808 from datanode a9d0d1bb-5569-4212-81f2-2aa371553760 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4219623e0ead5d4d: from storage DS-78a79b6a-5906-4806-9a50-2936f1ebb808 node DatanodeRegistration(127.0.0.1:46641, datanodeUuid=a9d0d1bb-5569-4212-81f2-2aa371553760, infoPort=46763, infoSecurePort=0, ipcPort=42127, storageInfo=lv=-57;cid=testClusterID;nsid=911605595;c=1689534900365), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x52ecb95d33f348f8: Processing first storage report for DS-70c3945b-2999-4be3-90eb-96692db0596f from datanode a4ca084f-b382-48e1-bfcf-52c89ddc82dc 2023-07-16 19:15:03,082 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x52ecb95d33f348f8: from storage DS-70c3945b-2999-4be3-90eb-96692db0596f node DatanodeRegistration(127.0.0.1:42907, datanodeUuid=a4ca084f-b382-48e1-bfcf-52c89ddc82dc, infoPort=40345, infoSecurePort=0, ipcPort=33713, storageInfo=lv=-57;cid=testClusterID;nsid=911605595;c=1689534900365), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:03,092 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef76f2e8bb45ff27: Processing first storage report for DS-8978be2c-120e-4bac-9600-36a8b2deef6c from datanode e3d6455e-262f-413c-9782-13699c0782a3 2023-07-16 19:15:03,092 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef76f2e8bb45ff27: from storage DS-8978be2c-120e-4bac-9600-36a8b2deef6c node DatanodeRegistration(127.0.0.1:45821, datanodeUuid=e3d6455e-262f-413c-9782-13699c0782a3, infoPort=39341, infoSecurePort=0, ipcPort=36799, storageInfo=lv=-57;cid=testClusterID;nsid=911605595;c=1689534900365), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:03,092 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xef76f2e8bb45ff27: Processing first storage report for DS-11f25df7-53d1-4b16-9e14-30a8e5897111 from datanode e3d6455e-262f-413c-9782-13699c0782a3 2023-07-16 19:15:03,093 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xef76f2e8bb45ff27: from storage 
DS-11f25df7-53d1-4b16-9e14-30a8e5897111 node DatanodeRegistration(127.0.0.1:45821, datanodeUuid=e3d6455e-262f-413c-9782-13699c0782a3, infoPort=39341, infoSecurePort=0, ipcPort=36799, storageInfo=lv=-57;cid=testClusterID;nsid=911605595;c=1689534900365), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 19:15:03,400 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436 2023-07-16 19:15:03,491 INFO [Listener at localhost/36799] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/zookeeper_0, clientPort=50949, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 19:15:03,511 INFO [Listener at localhost/36799] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50949 2023-07-16 19:15:03,524 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:03,527 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:04,242 INFO [Listener at localhost/36799] util.FSUtils(471): Created version file at hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049 with version=8 2023-07-16 19:15:04,243 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/hbase-staging 2023-07-16 19:15:04,254 DEBUG [Listener at localhost/36799] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 19:15:04,255 DEBUG [Listener at localhost/36799] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 19:15:04,255 DEBUG [Listener at localhost/36799] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 19:15:04,255 DEBUG [Listener at localhost/36799] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-16 19:15:04,661 INFO [Listener at localhost/36799] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-16 19:15:05,240 INFO [Listener at localhost/36799] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:05,293 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:05,293 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:05,294 INFO [Listener at localhost/36799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:05,294 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:05,294 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:05,451 INFO [Listener at localhost/36799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:05,552 DEBUG [Listener at localhost/36799] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-16 19:15:05,647 INFO [Listener at localhost/36799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38143 2023-07-16 19:15:05,660 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:05,662 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:05,686 INFO [Listener at localhost/36799] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38143 connecting to ZooKeeper ensemble=127.0.0.1:50949 2023-07-16 19:15:05,768 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:381430x0, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:05,775 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38143-0x1016f8f37ae0000 connected 2023-07-16 19:15:05,807 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:05,808 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:05,811 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:05,831 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38143 2023-07-16 19:15:05,831 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38143 2023-07-16 19:15:05,832 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38143 2023-07-16 19:15:05,834 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38143 2023-07-16 19:15:05,834 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38143 2023-07-16 19:15:05,873 INFO [Listener at localhost/36799] log.Log(170): Logging initialized @7588ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-16 19:15:06,041 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:06,041 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:06,042 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:06,044 INFO [Listener at localhost/36799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 19:15:06,044 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:06,044 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:06,048 INFO [Listener at localhost/36799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 19:15:06,115 INFO [Listener at localhost/36799] http.HttpServer(1146): Jetty bound to port 36849 2023-07-16 19:15:06,117 INFO [Listener at localhost/36799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:06,156 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,160 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@32d90619{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:06,161 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,161 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e41c305{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:06,244 INFO [Listener at localhost/36799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:06,259 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:06,260 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:06,263 INFO [Listener at localhost/36799] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:06,271 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,308 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@27ce108a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 19:15:06,324 INFO [Listener at localhost/36799] server.AbstractConnector(333): Started ServerConnector@93a89c0{HTTP/1.1, (http/1.1)}{0.0.0.0:36849} 2023-07-16 19:15:06,324 INFO [Listener at localhost/36799] server.Server(415): Started @8040ms 2023-07-16 19:15:06,328 INFO [Listener at localhost/36799] master.HMaster(444): hbase.rootdir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049, hbase.cluster.distributed=false 2023-07-16 19:15:06,431 INFO [Listener at localhost/36799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:06,432 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,432 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,432 INFO [Listener at localhost/36799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 
19:15:06,432 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,433 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:06,444 INFO [Listener at localhost/36799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:06,451 INFO [Listener at localhost/36799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46561 2023-07-16 19:15:06,455 INFO [Listener at localhost/36799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:06,470 DEBUG [Listener at localhost/36799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:06,471 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,474 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,476 INFO [Listener at localhost/36799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46561 connecting to ZooKeeper ensemble=127.0.0.1:50949 2023-07-16 19:15:06,490 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:465610x0, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:06,492 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:465610x0, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:06,497 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46561-0x1016f8f37ae0001 connected 2023-07-16 19:15:06,497 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:06,499 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:06,503 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46561 2023-07-16 19:15:06,504 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46561 2023-07-16 19:15:06,510 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46561 2023-07-16 19:15:06,516 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46561 2023-07-16 19:15:06,524 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46561 2023-07-16 19:15:06,529 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:06,529 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:06,529 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:06,531 INFO [Listener at localhost/36799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:06,531 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:06,531 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:06,531 INFO [Listener at localhost/36799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:06,534 INFO [Listener at localhost/36799] http.HttpServer(1146): Jetty bound to port 39121 2023-07-16 19:15:06,534 INFO [Listener at localhost/36799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:06,558 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,559 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4784b602{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:06,559 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,560 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@623f7cf4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:06,575 INFO [Listener at localhost/36799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:06,576 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:06,577 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:06,577 INFO [Listener at localhost/36799] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:06,584 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,589 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@1364e664{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:06,590 INFO [Listener at localhost/36799] server.AbstractConnector(333): Started ServerConnector@6fc105c0{HTTP/1.1, (http/1.1)}{0.0.0.0:39121} 2023-07-16 19:15:06,590 INFO [Listener at localhost/36799] server.Server(415): Started @8306ms 2023-07-16 19:15:06,604 INFO [Listener at localhost/36799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:06,604 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,605 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,605 INFO [Listener at localhost/36799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:06,606 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,606 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:06,606 INFO [Listener at localhost/36799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:06,611 INFO [Listener at localhost/36799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42201 2023-07-16 19:15:06,612 INFO [Listener at localhost/36799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:06,613 DEBUG [Listener at localhost/36799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:06,614 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,616 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,618 INFO [Listener at localhost/36799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42201 connecting to ZooKeeper ensemble=127.0.0.1:50949 2023-07-16 19:15:06,622 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:422010x0, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:06,623 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42201-0x1016f8f37ae0002 connected 2023-07-16 19:15:06,623 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): 
regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:06,624 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:06,625 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:06,626 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42201 2023-07-16 19:15:06,626 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42201 2023-07-16 19:15:06,631 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42201 2023-07-16 19:15:06,631 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42201 2023-07-16 19:15:06,634 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42201 2023-07-16 19:15:06,638 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:06,638 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:06,638 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:06,639 INFO [Listener at localhost/36799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:06,639 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:06,639 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:06,639 INFO [Listener at localhost/36799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 19:15:06,640 INFO [Listener at localhost/36799] http.HttpServer(1146): Jetty bound to port 36985 2023-07-16 19:15:06,640 INFO [Listener at localhost/36799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:06,644 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,645 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ab64a2f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:06,645 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,646 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39d35c00{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:06,659 INFO [Listener at localhost/36799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:06,659 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:06,660 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:06,660 INFO [Listener at localhost/36799] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:06,661 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,662 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2d178a1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:06,663 INFO [Listener at localhost/36799] server.AbstractConnector(333): Started ServerConnector@4ccea9bd{HTTP/1.1, (http/1.1)}{0.0.0.0:36985} 2023-07-16 19:15:06,664 INFO [Listener at localhost/36799] server.Server(415): Started @8379ms 2023-07-16 19:15:06,682 INFO [Listener at localhost/36799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:06,682 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,682 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,682 INFO [Listener at localhost/36799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:06,683 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-16 19:15:06,683 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:06,683 INFO [Listener at localhost/36799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:06,685 INFO [Listener at localhost/36799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37881 2023-07-16 19:15:06,686 INFO [Listener at localhost/36799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:06,687 DEBUG [Listener at localhost/36799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:06,688 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,690 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,692 INFO [Listener at localhost/36799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37881 connecting to ZooKeeper ensemble=127.0.0.1:50949 2023-07-16 19:15:06,697 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:378810x0, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:06,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37881-0x1016f8f37ae0003 connected 2023-07-16 19:15:06,698 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:06,699 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:06,700 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:06,703 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37881 2023-07-16 19:15:06,703 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37881 2023-07-16 19:15:06,703 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37881 2023-07-16 19:15:06,706 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37881 2023-07-16 19:15:06,707 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37881 2023-07-16 19:15:06,710 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:06,710 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:06,710 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:06,711 INFO [Listener at localhost/36799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:06,711 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:06,711 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:06,712 INFO [Listener at localhost/36799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:06,713 INFO [Listener at localhost/36799] http.HttpServer(1146): Jetty bound to port 36479 2023-07-16 19:15:06,713 INFO [Listener at localhost/36799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:06,717 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,717 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5a76cee2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:06,717 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,718 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5e2294fa{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:06,727 INFO [Listener at localhost/36799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:06,728 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:06,728 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:06,729 INFO [Listener at localhost/36799] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:06,731 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:06,731 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@36a7cf96{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:06,732 INFO [Listener at localhost/36799] server.AbstractConnector(333): Started ServerConnector@1266d143{HTTP/1.1, (http/1.1)}{0.0.0.0:36479} 2023-07-16 19:15:06,733 INFO [Listener at localhost/36799] server.Server(415): Started @8448ms 2023-07-16 19:15:06,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:06,743 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5e228df9{HTTP/1.1, (http/1.1)}{0.0.0.0:35181} 2023-07-16 19:15:06,743 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8459ms 2023-07-16 19:15:06,743 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:06,755 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 19:15:06,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:06,778 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:06,778 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:06,778 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:06,779 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:06,779 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:06,781 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:06,784 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38143,1689534904450 from backup master directory 2023-07-16 
19:15:06,784 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:06,788 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:06,788 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 19:15:06,789 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:06,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:06,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-16 19:15:06,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-16 19:15:06,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/hbase.id with ID: adae0019-4b04-4954-843c-248f454c2bfd 2023-07-16 19:15:06,973 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:06,995 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:07,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x1ee633ed to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:07,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5ccc551c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:07,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:07,134 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 19:15:07,155 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-16 19:15:07,156 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-16 19:15:07,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-16 19:15:07,166 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-16 19:15:07,168 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:07,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store-tmp 2023-07-16 19:15:07,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:07,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 19:15:07,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:07,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:07,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 19:15:07,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:07,256 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
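The two DEBUG stack traces above are expected noise: the async WAL helpers probe the classpath with reflection to decide which Hadoop vintage is present, and a missing method or enum constant simply selects an older code path. A minimal, pure-JDK sketch of that probing pattern follows; the class and method names being checked are taken from the log, but CapabilityProbe itself is illustrative and not HBase code.

    import java.lang.reflect.Method;

    public final class CapabilityProbe {

        // True if the named class declares a method with the given name (any signature).
        static boolean hasMethod(String className, String methodName) {
            try {
                for (Method m : Class.forName(className).getDeclaredMethods()) {
                    if (m.getName().equals(methodName)) {
                        return true;
                    }
                }
            } catch (ClassNotFoundException e) {
                // Class absent entirely; treat as "capability not available".
            }
            return false;
        }

        // True if the named enum type defines a constant with the given name.
        static boolean hasEnumConstant(String enumClassName, String constantName) {
            try {
                Object[] constants = Class.forName(enumClassName).getEnumConstants();
                if (constants == null) {
                    return false;   // not an enum type
                }
                for (Object c : constants) {
                    if (((Enum<?>) c).name().equals(constantName)) {
                        return true;
                    }
                }
            } catch (ClassNotFoundException e) {
                // fall through: absent class means absent capability
            }
            return false;
        }

        public static void main(String[] args) {
            // The same two checks the log reports; on a machine without Hadoop both print false.
            System.out.println("CreateFlag.SHOULD_REPLICATE present: "
                    + hasEnumConstant("org.apache.hadoop.fs.CreateFlag", "SHOULD_REPLICATE"));
            System.out.println("DFSClient.decryptEncryptedDataEncryptionKey present: "
                    + hasMethod("org.apache.hadoop.hdfs.DFSClient", "decryptEncryptedDataEncryptionKey"));
        }
    }

Either probe failing is not an error condition here; it only steers the helper onto the fallback branch, which is why the log records the exceptions at DEBUG level.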
2023-07-16 19:15:07,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:07,258 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/WALs/jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:07,285 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38143%2C1689534904450, suffix=, logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/WALs/jenkins-hbase4.apache.org,38143,1689534904450, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/oldWALs, maxLogs=10 2023-07-16 19:15:07,358 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:07,358 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:07,358 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:07,368 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-16 19:15:07,448 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/WALs/jenkins-hbase4.apache.org,38143,1689534904450/jenkins-hbase4.apache.org%2C38143%2C1689534904450.1689534907296 2023-07-16 19:15:07,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK], DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK]] 2023-07-16 19:15:07,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:07,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:07,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:07,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:07,529 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:07,537 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 19:15:07,572 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 19:15:07,588 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-16 19:15:07,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:07,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:07,621 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:07,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:07,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10144675520, jitterRate=-0.05520346760749817}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:07,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:07,632 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 19:15:07,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 19:15:07,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 19:15:07,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 19:15:07,674 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-16 19:15:07,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 48 msec 2023-07-16 19:15:07,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 19:15:07,758 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 19:15:07,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
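The ClassNotFoundException for org.apache.hadoop.thirdparty.protobuf.MessageLite above is a similarly benign probe, and the surrounding frames show the Netty future/listener flow the async output path rides on: a listener is registered on a promise and is invoked when the connection or block-write step completes. A small sketch of that pattern, assuming the stock io.netty artifact on the classpath (HBase itself uses the shaded copy under org.apache.hbase.thirdparty; PromiseListenerDemo is illustrative only):

    import io.netty.util.concurrent.DefaultPromise;
    import io.netty.util.concurrent.FutureListener;
    import io.netty.util.concurrent.GlobalEventExecutor;
    import io.netty.util.concurrent.Promise;

    public final class PromiseListenerDemo {
        public static void main(String[] args) throws InterruptedException {
            // A promise whose listeners are notified on an executor thread, the same
            // notifyListeners()/operationComplete() flow visible in the stack trace above.
            Promise<String> promise = new DefaultPromise<>(GlobalEventExecutor.INSTANCE);

            promise.addListener((FutureListener<String>) future -> {
                if (future.isSuccess()) {
                    System.out.println("pipeline ready: " + future.getNow());
                } else {
                    System.out.println("setup failed: " + future.cause());
                }
            });

            // The producer side (for example, once all datanode connections are up)
            // completes the promise, which asynchronously invokes the listener above.
            promise.trySuccess("3 datanodes connected");

            promise.await();        // block until the promise is complete
            Thread.sleep(100);      // give the listener callback time to print (demo only)
        }
    }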
2023-07-16 19:15:07,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 19:15:07,786 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 19:15:07,792 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 19:15:07,796 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:07,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 19:15:07,797 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 19:15:07,814 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 19:15:07,820 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:07,820 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:07,820 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:07,820 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:07,820 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:07,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38143,1689534904450, sessionid=0x1016f8f37ae0000, setting cluster-up flag (Was=false) 2023-07-16 19:15:07,840 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:07,846 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 19:15:07,848 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:07,858 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:07,870 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 19:15:07,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:07,874 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.hbase-snapshot/.tmp 2023-07-16 19:15:07,937 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(951): ClusterId : adae0019-4b04-4954-843c-248f454c2bfd 2023-07-16 19:15:07,937 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(951): ClusterId : adae0019-4b04-4954-843c-248f454c2bfd 2023-07-16 19:15:07,937 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(951): ClusterId : adae0019-4b04-4954-843c-248f454c2bfd 2023-07-16 19:15:07,945 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:07,945 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:07,945 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:07,952 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:07,952 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:07,952 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:07,952 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:07,952 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:07,953 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:07,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 19:15:07,957 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:07,957 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:07,959 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:07,960 DEBUG 
[RS:0;jenkins-hbase4:46561] zookeeper.ReadOnlyZKClient(139): Connect 0x235f4678 to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:07,960 DEBUG [RS:2;jenkins-hbase4:37881] zookeeper.ReadOnlyZKClient(139): Connect 0x00bf6c8b to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:07,960 DEBUG [RS:1;jenkins-hbase4:42201] zookeeper.ReadOnlyZKClient(139): Connect 0x0ca7625d to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:07,970 DEBUG [RS:2;jenkins-hbase4:37881] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e7882ec, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:07,970 DEBUG [RS:0;jenkins-hbase4:46561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2d98ac63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:07,971 DEBUG [RS:1;jenkins-hbase4:42201] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2608db9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:07,971 DEBUG [RS:2;jenkins-hbase4:37881] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a4707fb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:07,971 DEBUG [RS:0;jenkins-hbase4:46561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@29a72d08, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:07,971 DEBUG [RS:1;jenkins-hbase4:42201] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65f092d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:07,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 19:15:07,975 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:07,977 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 19:15:07,978 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
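The "Set watcher on znode that does not yet exist" lines are the standard ZooKeeper exists-watch idiom: register a watch on a path that may only be created later, then react to the NodeCreated event. A minimal sketch with the plain ZooKeeper client; the connect string, timeout, and path are assumptions for the example (the test cluster above happens to run its quorum on 127.0.0.1:50949):

    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public final class ExistsWatchDemo {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, event -> { /* connection events */ });

            Watcher nodeWatcher = event ->
                    System.out.println("event " + event.getType() + " on " + event.getPath());

            // exists() works whether or not the znode is there yet: if it is absent, the
            // watch is still registered and fires with NodeCreated when somebody creates it,
            // the same "Set watcher on znode that does not yet exist" pattern in the log.
            Stat stat = zk.exists("/hbase/balancer", nodeWatcher);
            System.out.println("/hbase/balancer currently " + (stat == null ? "absent" : "present"));

            Thread.sleep(30_000);  // keep the session alive long enough to observe an event (demo only)
            zk.close();
        }
    }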
2023-07-16 19:15:08,002 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42201 2023-07-16 19:15:08,003 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:46561 2023-07-16 19:15:08,004 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37881 2023-07-16 19:15:08,011 INFO [RS:0;jenkins-hbase4:46561] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:08,011 INFO [RS:2;jenkins-hbase4:37881] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:08,012 INFO [RS:2;jenkins-hbase4:37881] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:08,011 INFO [RS:1;jenkins-hbase4:42201] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:08,012 INFO [RS:1;jenkins-hbase4:42201] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:08,012 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:08,012 INFO [RS:0;jenkins-hbase4:46561] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:08,012 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:08,012 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:08,016 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:37881, startcode=1689534906681 2023-07-16 19:15:08,016 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:46561, startcode=1689534906430 2023-07-16 19:15:08,016 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:42201, startcode=1689534906603 2023-07-16 19:15:08,041 DEBUG [RS:2;jenkins-hbase4:37881] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:08,041 DEBUG [RS:0;jenkins-hbase4:46561] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:08,041 DEBUG [RS:1;jenkins-hbase4:42201] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:08,105 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:08,111 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40667, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:08,111 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46651, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 
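The "Installed shutdown hook thread" lines correspond to the ordinary JVM shutdown-hook mechanism, which lets a region server run cleanup even on an abrupt kill. An illustrative, pure-JDK sketch (not HBase's own ShutdownHook class):

    import java.util.concurrent.atomic.AtomicBoolean;

    public final class ShutdownHookDemo {
        private static final AtomicBoolean closed = new AtomicBoolean(false);

        public static void main(String[] args) throws InterruptedException {
            Thread hook = new Thread(ShutdownHookDemo::shutdown, "Shutdownhook:demo-regionserver");
            Runtime.getRuntime().addShutdownHook(hook);
            System.out.println("Installed shutdown hook thread: " + hook.getName());

            Thread.sleep(2_000);   // simulate server work; Ctrl-C here also runs the hook
            shutdown();            // the normal exit path calls the same idempotent cleanup
        }

        private static void shutdown() {
            // Guard so the hook and the normal exit path do not both run the cleanup.
            if (closed.compareAndSet(false, true)) {
                System.out.println("flushing and closing resources before JVM exit");
            }
        }
    }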
2023-07-16 19:15:08,111 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56071, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:08,124 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:08,134 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:08,135 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:08,170 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 19:15:08,170 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 19:15:08,170 WARN [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 19:15:08,170 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 19:15:08,170 WARN [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
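The ServerNotRunningYetException entries are part of normal startup ordering: the region servers reach the master's RPC port before the master service is fully up, log "reportForDuty failed; sleeping 100 ms and then retrying.", and try again. A generic sketch of that retry-until-ready pattern in plain Java, with IllegalStateException standing in for the RPC exception and all names illustrative:

    import java.util.function.Supplier;

    public final class RetryUntilReady {

        // Retries the call with a fixed pause until it succeeds or the deadline passes.
        static <T> T retry(Supplier<T> call, long pauseMillis, long timeoutMillis) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (true) {
                try {
                    return call.get();
                } catch (IllegalStateException serverNotRunningYet) {
                    if (System.currentTimeMillis() >= deadline) {
                        throw serverNotRunningYet;
                    }
                    System.out.println("reportForDuty failed; sleeping " + pauseMillis + " ms and then retrying.");
                    Thread.sleep(pauseMillis);
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            // Stand-in for the master: "not running yet" for a short while, then accepts registration.
            long readyAt = System.currentTimeMillis() + 350;
            String ack = retry(() -> {
                if (System.currentTimeMillis() < readyAt) {
                    throw new IllegalStateException("Server is not running yet");
                }
                return "registered";
            }, 100, 10_000);
            System.out.println(ack);
        }
    }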
2023-07-16 19:15:08,171 WARN [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 19:15:08,173 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 19:15:08,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 19:15:08,178 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 19:15:08,179 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 19:15:08,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:08,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:08,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:08,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:08,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 19:15:08,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:08,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689534938186 2023-07-16 19:15:08,189 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 19:15:08,193 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:08,194 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 19:15:08,195 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 19:15:08,196 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:08,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 19:15:08,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 19:15:08,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 19:15:08,210 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 19:15:08,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
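The hbase:meta descriptor dumped above (families 'info', 'rep_barrier', and 'table' with their VERSIONS, BLOCKSIZE, and IN_MEMORY settings) has the same shape a client builds through the public HBase 2.x descriptor builders. A hedged sketch that assembles a comparable family for a hypothetical user table; the table name, and the choice to print rather than create the table, are assumptions of the example:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class DescriptorSketch {
        public static void main(String[] args) {
            // Column family mirroring the 'info' settings in the log:
            // VERSIONS => 3, IN_MEMORY => true, BLOCKSIZE => 8192, BLOOMFILTER => NONE.
            ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
                    .newBuilder(Bytes.toBytes("info"))
                    .setMaxVersions(3)
                    .setInMemory(true)
                    .setBlocksize(8192)
                    .setBloomFilterType(BloomType.NONE)
                    .build();

            TableDescriptor table = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("demo_meta_like"))
                    .setColumnFamily(info)
                    .build();

            // With a running cluster this descriptor would be passed to Admin#createTable;
            // here we only print it, which produces the same {NAME => 'info', ...} style dump.
            System.out.println(table);
        }
    }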
2023-07-16 19:15:08,213 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 19:15:08,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 19:15:08,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 19:15:08,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 19:15:08,222 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 19:15:08,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534908225,5,FailOnTimeoutGroup] 2023-07-16 19:15:08,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534908226,5,FailOnTimeoutGroup] 2023-07-16 19:15:08,226 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,226 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 19:15:08,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,272 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:42201, startcode=1689534906603 2023-07-16 19:15:08,272 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:46561, startcode=1689534906430 2023-07-16 19:15:08,272 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:37881, startcode=1689534906681 2023-07-16 19:15:08,278 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,279 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
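The LogsCleaner and HFileCleaner chores above are fixed-period background tasks (period=600000 ms). A plain-JDK sketch of scheduling such a chore with ScheduledExecutorService; the thread name and task body are illustrative stand-ins for the real cleaners:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public final class ChoreSketch {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService chores = Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "LogsCleaner-demo");
                t.setDaemon(true);
                return t;
            });

            // period=600000, unit=MILLISECONDS, as in the LogsCleaner/HFileCleaner chores above;
            // the task body stands in for "delete old WALs/HFiles that have passed their TTL".
            chores.scheduleAtFixedRate(
                    () -> System.out.println("chore fired: scanning oldWALs for expired files"),
                    0, 600_000, TimeUnit.MILLISECONDS);

            TimeUnit.SECONDS.sleep(2);   // let the first run fire, then shut down (demo only)
            chores.shutdownNow();
        }
    }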
2023-07-16 19:15:08,280 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 19:15:08,285 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,285 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:08,285 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 19:15:08,286 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049 2023-07-16 19:15:08,287 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34211 2023-07-16 19:15:08,287 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36849 2023-07-16 19:15:08,286 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,287 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049 2023-07-16 19:15:08,288 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34211 2023-07-16 19:15:08,288 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36849 2023-07-16 19:15:08,294 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 19:15:08,294 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 19:15:08,304 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049 2023-07-16 19:15:08,304 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34211 2023-07-16 19:15:08,305 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36849 2023-07-16 19:15:08,304 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:08,306 DEBUG [RS:1;jenkins-hbase4:42201] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,306 WARN [RS:1;jenkins-hbase4:42201] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:08,306 INFO [RS:1;jenkins-hbase4:42201] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:08,306 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,313 DEBUG [RS:0;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,313 WARN [RS:0;jenkins-hbase4:46561] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:08,317 DEBUG [RS:2;jenkins-hbase4:37881] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,317 WARN [RS:2;jenkins-hbase4:37881] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
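The /hbase/rs znodes being watched above are the region servers' ephemeral registrations: each server creates an ephemeral znode under /hbase/rs, so its entry vanishes automatically when its session dies, which is what RegionServerTracker reacts to. A minimal sketch with the stock ZooKeeper client; the connect string, host name, and port are assumptions, and the parent /hbase/rs path must already exist:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public final class EphemeralRegistration {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 90_000, event -> { });

            // Ephemeral: the znode lives only as long as this session, so a crashed server
            // drops out of /hbase/rs without any explicit cleanup.
            String path = zk.create(
                    "/hbase/rs/demo-host.example.org,16020," + System.currentTimeMillis(),
                    new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.EPHEMERAL);
            System.out.println("registered at " + path);

            Thread.sleep(10_000);  // while the session is alive, the master's tracker can see us
            zk.close();            // session ends, the znode is removed, NodeChildrenChanged fires
        }
    }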
2023-07-16 19:15:08,317 INFO [RS:0;jenkins-hbase4:46561] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:08,317 INFO [RS:2;jenkins-hbase4:37881] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:08,324 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,324 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,325 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37881,1689534906681] 2023-07-16 19:15:08,325 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46561,1689534906430] 2023-07-16 19:15:08,325 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42201,1689534906603] 2023-07-16 19:15:08,359 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:08,360 DEBUG [RS:2;jenkins-hbase4:37881] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,360 DEBUG [RS:0;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,360 DEBUG [RS:1;jenkins-hbase4:42201] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,360 DEBUG [RS:2;jenkins-hbase4:37881] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,361 DEBUG [RS:0;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,361 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:08,361 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049 2023-07-16 19:15:08,361 DEBUG [RS:2;jenkins-hbase4:37881] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,362 DEBUG [RS:1;jenkins-hbase4:42201] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,363 DEBUG [RS:0;jenkins-hbase4:46561] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,364 DEBUG [RS:1;jenkins-hbase4:42201] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,392 DEBUG [RS:2;jenkins-hbase4:37881] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:08,392 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:08,392 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:08,414 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:08,414 INFO [RS:0;jenkins-hbase4:46561] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:08,414 INFO [RS:1;jenkins-hbase4:42201] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:08,414 INFO [RS:2;jenkins-hbase4:37881] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:08,419 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:08,422 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info 2023-07-16 19:15:08,422 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:08,432 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:08,433 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:08,438 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:08,439 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:08,443 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:08,444 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:08,447 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table 2023-07-16 19:15:08,447 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:08,448 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:08,456 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740 2023-07-16 19:15:08,458 INFO [RS:1;jenkins-hbase4:42201] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:08,458 INFO [RS:2;jenkins-hbase4:37881] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:08,459 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740 2023-07-16 19:15:08,461 INFO [RS:0;jenkins-hbase4:46561] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:08,464 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 19:15:08,465 INFO [RS:2;jenkins-hbase4:37881] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:08,465 INFO [RS:1;jenkins-hbase4:42201] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:08,466 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,465 INFO [RS:0;jenkins-hbase4:46561] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:08,466 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,466 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
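As context (not part of the captured output): the MemStoreFlusher and PressureAwareCompactionThroughputController figures above are driven by configuration, with the low-water mark being ~95% of the global limit (743.3 M / 782.4 M). A hedged sketch of the keys involved, using the names as I recall them for HBase 2.x; the key names are assumptions worth verifying against the exact version, the numeric values simply restate what the log prints.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MemstoreAndCompactionTuning {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        // Global memstore limit as a fraction of heap (782.4 M above) and its
        // low-water mark as a fraction of that limit (95% -> 743.3 M above).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput throttle reported as
        // "higher bound: 100.00 MB/second, lower bound 50.00 MB/second".
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        // Period of the CompactionThroughputTuner chore (60000 ms above).
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
        return conf;
      }
    }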
2023-07-16 19:15:08,467 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:08,467 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:08,470 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:08,474 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:08,476 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:08,477 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11384697280, jitterRate=0.06028255820274353}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:08,477 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:08,477 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:08,477 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:08,477 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:08,477 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:08,477 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:08,478 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:08,478 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 19:15:08,484 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,484 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
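As context (not part of the captured output): the FlushLargeStoresPolicy{flushSizeLowerBound=44739242} above is just the region flush size divided by hbase:meta's three column families (info, rep_barrier, table), which is what the earlier "region.getMemStoreFlushHeapSize/# of families (42.7 M)" message describes. A small sketch of that arithmetic, assuming the stock 128 MB default for hbase.hregion.memstore.flush.size.

    public class FlushLowerBound {
      public static void main(String[] args) {
        long memstoreFlushSize = 128L * 1024 * 1024; // hbase.hregion.memstore.flush.size default (134217728)
        int familiesInMeta = 3;                      // info, rep_barrier, table
        long lowerBound = memstoreFlushSize / familiesInMeta;
        System.out.println(lowerBound);              // 44739242 bytes, i.e. ~42.7 MB as logged
      }
    }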
2023-07-16 19:15:08,485 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,486 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:08,486 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:08,487 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:1;jenkins-hbase4:42201] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,487 DEBUG [RS:0;jenkins-hbase4:46561] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,488 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:08,488 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 19:15:08,492 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,492 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,492 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,492 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,492 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,493 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,493 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,493 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,493 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,493 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,493 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:08,494 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,494 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,494 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,494 DEBUG [RS:2;jenkins-hbase4:37881] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:08,493 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,493 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,500 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,500 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,500 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,501 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 19:15:08,523 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 19:15:08,527 INFO [RS:2;jenkins-hbase4:37881] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:08,527 INFO [RS:0;jenkins-hbase4:46561] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:08,531 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 19:15:08,535 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37881,1689534906681-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,536 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46561,1689534906430-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:08,538 INFO [RS:1;jenkins-hbase4:42201] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:08,539 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42201,1689534906603-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:08,564 INFO [RS:0;jenkins-hbase4:46561] regionserver.Replication(203): jenkins-hbase4.apache.org,46561,1689534906430 started 2023-07-16 19:15:08,565 INFO [RS:1;jenkins-hbase4:42201] regionserver.Replication(203): jenkins-hbase4.apache.org,42201,1689534906603 started 2023-07-16 19:15:08,565 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46561,1689534906430, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46561, sessionid=0x1016f8f37ae0001 2023-07-16 19:15:08,565 INFO [RS:2;jenkins-hbase4:37881] regionserver.Replication(203): jenkins-hbase4.apache.org,37881,1689534906681 started 2023-07-16 19:15:08,565 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42201,1689534906603, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42201, sessionid=0x1016f8f37ae0002 2023-07-16 19:15:08,565 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37881,1689534906681, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37881, sessionid=0x1016f8f37ae0003 2023-07-16 19:15:08,565 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:08,565 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:08,565 DEBUG [RS:0;jenkins-hbase4:46561] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,565 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:08,566 DEBUG [RS:0;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46561,1689534906430' 2023-07-16 19:15:08,565 DEBUG [RS:2;jenkins-hbase4:37881] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,566 DEBUG [RS:0;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:08,566 DEBUG [RS:1;jenkins-hbase4:42201] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,566 DEBUG [RS:2;jenkins-hbase4:37881] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37881,1689534906681' 2023-07-16 19:15:08,568 DEBUG [RS:1;jenkins-hbase4:42201] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42201,1689534906603' 2023-07-16 19:15:08,568 DEBUG [RS:1;jenkins-hbase4:42201] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:08,568 DEBUG [RS:2;jenkins-hbase4:37881] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:08,569 DEBUG [RS:1;jenkins-hbase4:42201] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:08,569 DEBUG [RS:0;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:08,569 DEBUG 
[RS:2;jenkins-hbase4:37881] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:08,570 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:08,570 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:08,570 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:08,570 DEBUG [RS:2;jenkins-hbase4:37881] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,570 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:08,570 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:08,570 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:08,570 DEBUG [RS:2;jenkins-hbase4:37881] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37881,1689534906681' 2023-07-16 19:15:08,571 DEBUG [RS:2;jenkins-hbase4:37881] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:08,571 DEBUG [RS:0;jenkins-hbase4:46561] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:08,571 DEBUG [RS:1;jenkins-hbase4:42201] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:08,571 DEBUG [RS:1;jenkins-hbase4:42201] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42201,1689534906603' 2023-07-16 19:15:08,571 DEBUG [RS:1;jenkins-hbase4:42201] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:08,571 DEBUG [RS:0;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46561,1689534906430' 2023-07-16 19:15:08,572 DEBUG [RS:0;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:08,572 DEBUG [RS:2;jenkins-hbase4:37881] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:08,572 DEBUG [RS:1;jenkins-hbase4:42201] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:08,572 DEBUG [RS:0;jenkins-hbase4:46561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:08,572 DEBUG [RS:2;jenkins-hbase4:37881] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:08,573 INFO [RS:2;jenkins-hbase4:37881] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:08,573 DEBUG [RS:0;jenkins-hbase4:46561] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:08,573 DEBUG [RS:1;jenkins-hbase4:42201] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot 
started 2023-07-16 19:15:08,573 INFO [RS:0;jenkins-hbase4:46561] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:08,573 INFO [RS:2;jenkins-hbase4:37881] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 19:15:08,575 INFO [RS:0;jenkins-hbase4:46561] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 19:15:08,575 INFO [RS:1;jenkins-hbase4:42201] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:08,575 INFO [RS:1;jenkins-hbase4:42201] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 19:15:08,684 DEBUG [jenkins-hbase4:38143] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 19:15:08,693 INFO [RS:1;jenkins-hbase4:42201] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42201%2C1689534906603, suffix=, logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,42201,1689534906603, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs, maxLogs=32 2023-07-16 19:15:08,693 INFO [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37881%2C1689534906681, suffix=, logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,37881,1689534906681, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs, maxLogs=32 2023-07-16 19:15:08,695 INFO [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46561%2C1689534906430, suffix=, logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,46561,1689534906430, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs, maxLogs=32 2023-07-16 19:15:08,706 DEBUG [jenkins-hbase4:38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:08,707 DEBUG [jenkins-hbase4:38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:08,707 DEBUG [jenkins-hbase4:38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:08,707 DEBUG [jenkins-hbase4:38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:08,707 DEBUG [jenkins-hbase4:38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:08,712 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37881,1689534906681, state=OPENING 2023-07-16 19:15:08,727 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:08,727 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration 
for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:08,728 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:08,730 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 19:15:08,733 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:08,734 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:08,740 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:08,743 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:08,744 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:08,745 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:08,745 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:08,746 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:08,746 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:08,763 INFO [RS:1;jenkins-hbase4:42201] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,42201,1689534906603/jenkins-hbase4.apache.org%2C42201%2C1689534906603.1689534908699 2023-07-16 19:15:08,764 DEBUG [RS:1;jenkins-hbase4:42201] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK], 
DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK]] 2023-07-16 19:15:08,765 INFO [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,37881,1689534906681/jenkins-hbase4.apache.org%2C37881%2C1689534906681.1689534908699 2023-07-16 19:15:08,765 INFO [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,46561,1689534906430/jenkins-hbase4.apache.org%2C46561%2C1689534906430.1689534908699 2023-07-16 19:15:08,765 DEBUG [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK], DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK]] 2023-07-16 19:15:08,767 DEBUG [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK], DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK]] 2023-07-16 19:15:08,778 WARN [ReadOnlyZKClient-127.0.0.1:50949@0x1ee633ed] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 19:15:08,811 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:08,815 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35330, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:08,816 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37881] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:35330 deadline: 1689534968816, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,935 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:08,939 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:08,944 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35342, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:08,957 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 19:15:08,957 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:08,961 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37881%2C1689534906681.meta, suffix=.meta, 
logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,37881,1689534906681, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs, maxLogs=32 2023-07-16 19:15:08,981 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:08,982 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:08,984 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:08,992 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,37881,1689534906681/jenkins-hbase4.apache.org%2C37881%2C1689534906681.meta.1689534908962.meta 2023-07-16 19:15:08,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK], DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK]] 2023-07-16 19:15:08,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:08,995 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:08,998 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 19:15:09,000 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
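As context (not part of the captured output): the MultiRowMutationEndpoint loaded above reaches the region via the table descriptor (the coprocessor$1 attribute printed earlier for hbase:meta). A minimal sketch, assuming the HBase 2.x TableDescriptorBuilder API, of how a coprocessor ends up in a descriptor this way; the table name is purely illustrative, since the meta descriptor itself is built internally by the master.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorOnDescriptor {
      public static TableDescriptor build() throws Exception {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
            // Registers the endpoint with default priority; the meta descriptor above
            // pins priority 536870911 explicitly.
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .build();
      }
    }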
2023-07-16 19:15:09,009 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 19:15:09,010 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:09,010 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 19:15:09,010 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 19:15:09,013 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:09,015 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info 2023-07-16 19:15:09,015 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info 2023-07-16 19:15:09,016 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:09,016 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:09,017 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:09,018 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:09,018 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:09,018 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:09,019 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:09,019 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:09,021 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table 2023-07-16 19:15:09,021 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table 2023-07-16 19:15:09,021 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:09,022 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:09,024 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740 2023-07-16 19:15:09,033 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740 2023-07-16 19:15:09,037 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 19:15:09,039 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:09,041 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10702732000, jitterRate=-0.0032304078340530396}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:09,041 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:09,053 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689534908931 2023-07-16 19:15:09,078 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 19:15:09,079 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 19:15:09,080 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37881,1689534906681, state=OPEN 2023-07-16 19:15:09,083 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 19:15:09,083 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:09,088 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 19:15:09,088 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37881,1689534906681 in 345 msec 2023-07-16 19:15:09,095 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 19:15:09,095 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 588 msec 2023-07-16 19:15:09,101 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1130 sec 2023-07-16 19:15:09,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689534909102, completionTime=-1 2023-07-16 19:15:09,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 19:15:09,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
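As context (not part of the captured output): once the state above flips to OPEN, /hbase/meta-region-server points clients at jenkins-hbase4.apache.org,37881,1689534906681; the earlier NotServingRegionException and "Meta region is in state OPENING" retries are the normal race while that window is still open. A small sketch, assuming the standard 2.x client API, of how a caller resolves the meta location after the fact; the Connection argument would come from something like the hypothetical connect() helper sketched earlier.

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocation {
      public static HRegionLocation locate(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Resolves hbase:meta,,1 to its current server,
          // e.g. jenkins-hbase4.apache.org,37881,... in the log above.
          return locator.getRegionLocation(HConstants.EMPTY_START_ROW, true /* reload */);
        }
      }
    }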
2023-07-16 19:15:09,181 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 19:15:09,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689534969182 2023-07-16 19:15:09,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689535029182 2023-07-16 19:15:09,182 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 79 msec 2023-07-16 19:15:09,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38143,1689534904450-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:09,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38143,1689534904450-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:09,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38143,1689534904450-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:09,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38143, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:09,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:09,214 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 19:15:09,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
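As context (not part of the captured output): the master chore periods above (BalancerChore and RegionNormalizerChore at 300000 ms, CatalogJanitor at 300000 ms, HbckChore at 3600000 ms) are all configurable. A hedged sketch using the key names as I recall them for 2.4; the names are assumptions and worth verifying, only the period values are taken from the log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class MasterChorePeriods {
      public static Configuration build() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.balancer.period", 300000);              // BalancerChore
        conf.setInt("hbase.normalizer.period", 300000);            // RegionNormalizerChore
        conf.setInt("hbase.catalogjanitor.interval", 300000);      // CatalogJanitor
        conf.setInt("hbase.master.hbck.chore.interval", 3600000);  // HbckChore
        return conf;
      }
    }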
2023-07-16 19:15:09,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:09,246 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 19:15:09,250 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:09,253 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:09,269 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,273 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b empty. 2023-07-16 19:15:09,275 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,275 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 19:15:09,334 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:09,337 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:09,345 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 34e2a05c74ec47ec61d0b84dc3cec19b, NAME => 'hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:09,346 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 19:15:09,350 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:09,362 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:09,374 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,375 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 empty. 2023-07-16 19:15:09,376 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,377 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 19:15:09,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:09,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 34e2a05c74ec47ec61d0b84dc3cec19b, disabling compactions & flushes 2023-07-16 19:15:09,424 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:09,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:09,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. after waiting 0 ms 2023-07-16 19:15:09,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:09,424 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 
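The CreateTableProcedure for hbase:namespace above is driven by the table descriptor printed earlier (info family, VERSIONS=10, IN_MEMORY=true, BLOOMFILTER=ROW, BLOCKSIZE=8192). A rough client-side sketch of building a comparable descriptor and submitting it through the Admin API; the table name and connection setup are illustrative assumptions, not part of the test.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.regionserver.BloomType;

public class CreateNamespaceLikeTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Column family matching the logged settings: VERSIONS=10, IN_MEMORY=true,
      // BLOOMFILTER=ROW, BLOCKSIZE=8192, TTL=FOREVER, no compression/encoding.
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("default", "ns_like"))   // hypothetical table name
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder("info".getBytes())
              .setMaxVersions(10)
              .setInMemory(true)
              .setBloomFilterType(BloomType.ROW)
              .setBlocksize(8192)
              .setTimeToLive(HConstants.FOREVER)
              .setCompressionType(Compression.Algorithm.NONE)
              .build())
          .build();
      admin.createTable(td);   // stores a CreateTableProcedure, as with pid=4 above
    }
  }
}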
2023-07-16 19:15:09,424 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 34e2a05c74ec47ec61d0b84dc3cec19b: 2023-07-16 19:15:09,449 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:09,457 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:09,459 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2635ffdc96eb53d27ddc03fa25e81955, NAME => 'hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:09,484 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534909454"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534909454"}]},"ts":"1689534909454"} 2023-07-16 19:15:09,488 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:09,489 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 2635ffdc96eb53d27ddc03fa25e81955, disabling compactions & flushes 2023-07-16 19:15:09,489 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:09,489 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:09,489 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. after waiting 0 ms 2023-07-16 19:15:09,489 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:09,489 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 
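The hbase:rsgroup descriptor above additionally pins the MultiRowMutationEndpoint coprocessor and disables region splitting via the SPLIT_POLICY metadata. A sketch of expressing those two table-level attributes with TableDescriptorBuilder; the table name is hypothetical and only the descriptor construction is shown.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RsGroupLikeDescriptor {
  static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo", "rsgroup_like"))   // hypothetical name
        // Same coprocessor class the log shows being loaded from the HTD.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // Equivalent of the SPLIT_POLICY metadata in the printed descriptor.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))  // single 'm' family
        .build();
  }
}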
2023-07-16 19:15:09,489 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 2635ffdc96eb53d27ddc03fa25e81955: 2023-07-16 19:15:09,497 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:09,499 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534909499"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534909499"}]},"ts":"1689534909499"} 2023-07-16 19:15:09,534 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:09,534 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:09,541 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:09,544 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:09,551 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534909541"}]},"ts":"1689534909541"} 2023-07-16 19:15:09,551 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534909544"}]},"ts":"1689534909544"} 2023-07-16 19:15:09,559 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 19:15:09,566 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:09,567 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:09,567 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:09,567 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:09,567 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:09,568 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 19:15:09,570 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, ASSIGN}] 2023-07-16 19:15:09,573 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, ASSIGN 2023-07-16 19:15:09,576 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, ASSIGN; 
state=OFFLINE, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:09,577 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:09,577 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:09,577 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:09,577 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:09,577 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:09,578 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, ASSIGN}] 2023-07-16 19:15:09,584 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, ASSIGN 2023-07-16 19:15:09,588 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:09,589 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
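Once the TransitRegionStateProcedure/OpenRegionProcedure chain above completes, the regions' locations are recorded in hbase:meta. A small sketch, assuming an already-open Connection named conn, of how a client or test could read those locations back through RegionLocator to confirm which server hosts each region.

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ShowRegionLocations {
  // conn is assumed to be an existing Connection to the (mini) cluster.
  static void dumpLocations(Connection conn) throws Exception {
    try (RegionLocator locator =
             conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints encoded region name and hosting server, e.g. jenkins-hbase4...,37881,...
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}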
2023-07-16 19:15:09,591 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=34e2a05c74ec47ec61d0b84dc3cec19b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:09,591 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=2635ffdc96eb53d27ddc03fa25e81955, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:09,591 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534909590"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534909590"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534909590"}]},"ts":"1689534909590"} 2023-07-16 19:15:09,591 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534909590"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534909590"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534909590"}]},"ts":"1689534909590"} 2023-07-16 19:15:09,594 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:09,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:09,763 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:09,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2635ffdc96eb53d27ddc03fa25e81955, NAME => 'hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:09,763 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:09,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. service=MultiRowMutationService 2023-07-16 19:15:09,765 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 19:15:09,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:09,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,766 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,771 INFO [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,773 DEBUG [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m 2023-07-16 19:15:09,773 DEBUG [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m 2023-07-16 19:15:09,774 INFO [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2635ffdc96eb53d27ddc03fa25e81955 columnFamilyName m 2023-07-16 19:15:09,775 INFO [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] regionserver.HStore(310): Store=2635ffdc96eb53d27ddc03fa25e81955/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:09,777 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,778 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,783 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:09,788 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:09,791 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2635ffdc96eb53d27ddc03fa25e81955; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@c1d33e5, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:09,791 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2635ffdc96eb53d27ddc03fa25e81955: 2023-07-16 19:15:09,793 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955., pid=9, masterSystemTime=1689534909752 2023-07-16 19:15:09,798 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:09,799 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:09,799 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:09,799 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 34e2a05c74ec47ec61d0b84dc3cec19b, NAME => 'hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:09,799 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:09,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,800 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,801 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=2635ffdc96eb53d27ddc03fa25e81955, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:09,801 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534909800"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534909800"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534909800"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534909800"}]},"ts":"1689534909800"} 2023-07-16 19:15:09,802 INFO [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,805 DEBUG [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info 2023-07-16 19:15:09,805 DEBUG [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info 2023-07-16 19:15:09,806 INFO [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 34e2a05c74ec47ec61d0b84dc3cec19b columnFamilyName info 2023-07-16 19:15:09,807 INFO [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] regionserver.HStore(310): Store=34e2a05c74ec47ec61d0b84dc3cec19b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:09,808 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,810 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:09,817 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-16 19:15:09,817 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,37881,1689534906681 in 208 msec 2023-07-16 19:15:09,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 34e2a05c74ec47ec61d0b84dc3cec19b 
2023-07-16 19:15:09,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-16 19:15:09,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, ASSIGN in 248 msec 2023-07-16 19:15:09,826 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:09,827 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534909826"}]},"ts":"1689534909826"} 2023-07-16 19:15:09,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:09,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 34e2a05c74ec47ec61d0b84dc3cec19b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10416166880, jitterRate=-0.029918864369392395}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:09,830 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 19:15:09,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 34e2a05c74ec47ec61d0b84dc3cec19b: 2023-07-16 19:15:09,834 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b., pid=8, masterSystemTime=1689534909752 2023-07-16 19:15:09,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:09,837 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 
2023-07-16 19:15:09,839 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:09,839 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=34e2a05c74ec47ec61d0b84dc3cec19b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:09,839 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534909839"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534909839"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534909839"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534909839"}]},"ts":"1689534909839"} 2023-07-16 19:15:09,844 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 502 msec 2023-07-16 19:15:09,848 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-16 19:15:09,849 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,37881,1689534906681 in 249 msec 2023-07-16 19:15:09,865 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-16 19:15:09,865 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, ASSIGN in 271 msec 2023-07-16 19:15:09,867 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:09,867 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534909867"}]},"ts":"1689534909867"} 2023-07-16 19:15:09,870 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 19:15:09,879 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:09,886 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 647 msec 2023-07-16 19:15:09,888 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 19:15:09,888 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
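Once the log reports that hbase:rsgroup is online and the GroupBasedLoadBalancer is up, group metadata can be queried from the client side. A sketch using the RSGroupAdminClient class from the hbase-rsgroup module this test belongs to, assuming an existing Connection named conn; error handling is omitted and the exact constructor signature is an assumption.

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListGroups {
  // conn is assumed to be a live Connection to the mini cluster.
  static void listGroups(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Issues the same RSGroupAdminService.ListRSGroupInfos call seen later in this log.
    for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
      System.out.println(group.getName() + " servers=" + group.getServers()
          + " tables=" + group.getTables());
    }
  }
}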
2023-07-16 19:15:09,951 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 19:15:09,953 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:09,954 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:09,978 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 19:15:09,992 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:09,992 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:10,016 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:10,017 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:10,025 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 19:15:10,038 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 58 msec 2023-07-16 19:15:10,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 19:15:10,068 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:10,076 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 24 msec 2023-07-16 19:15:10,100 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 19:15:10,104 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 19:15:10,105 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.316sec 2023-07-16 19:15:10,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 19:15:10,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 19:15:10,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 19:15:10,129 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38143,1689534904450-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 19:15:10,130 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38143,1689534904450-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 19:15:10,148 DEBUG [Listener at localhost/36799] zookeeper.ReadOnlyZKClient(139): Connect 0x083f3c49 to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:10,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 19:15:10,206 DEBUG [Listener at localhost/36799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@62c69654, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:10,227 DEBUG [hconnection-0xfdeaa0f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:10,247 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35356, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:10,262 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:10,264 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:10,278 DEBUG [Listener at localhost/36799] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 19:15:10,283 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41906, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 19:15:10,304 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 19:15:10,304 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:10,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38143] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 19:15:10,312 DEBUG [Listener at localhost/36799] zookeeper.ReadOnlyZKClient(139): Connect 
0x1ad901ea to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:10,319 DEBUG [Listener at localhost/36799] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78ef668c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:10,319 INFO [Listener at localhost/36799] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50949 2023-07-16 19:15:10,346 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:10,380 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016f8f37ae000a connected 2023-07-16 19:15:10,465 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=417, OpenFileDescriptor=672, MaxFileDescriptor=60000, SystemLoadAverage=379, ProcessCount=172, AvailableMemoryMB=3524 2023-07-16 19:15:10,469 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-16 19:15:10,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:10,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:10,589 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 19:15:10,605 INFO [Listener at localhost/36799] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:10,606 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:10,606 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:10,606 INFO [Listener at localhost/36799] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:10,606 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:10,606 INFO [Listener at localhost/36799] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:10,606 INFO [Listener at localhost/36799] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:10,614 INFO [Listener at localhost/36799] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35369 2023-07-16 19:15:10,615 INFO 
[Listener at localhost/36799] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:10,621 DEBUG [Listener at localhost/36799] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:10,623 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:10,635 INFO [Listener at localhost/36799] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:10,655 INFO [Listener at localhost/36799] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35369 connecting to ZooKeeper ensemble=127.0.0.1:50949 2023-07-16 19:15:10,664 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:353690x0, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:10,666 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(162): regionserver:353690x0, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:10,669 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(162): regionserver:353690x0, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 19:15:10,671 DEBUG [Listener at localhost/36799] zookeeper.ZKUtil(164): regionserver:353690x0, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:10,677 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35369-0x1016f8f37ae000b connected 2023-07-16 19:15:10,678 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35369 2023-07-16 19:15:10,680 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35369 2023-07-16 19:15:10,685 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35369 2023-07-16 19:15:10,694 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35369 2023-07-16 19:15:10,697 DEBUG [Listener at localhost/36799] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35369 2023-07-16 19:15:10,701 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:10,701 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:10,701 INFO [Listener at localhost/36799] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:10,702 INFO [Listener at localhost/36799] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:10,702 INFO [Listener at 
localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:10,702 INFO [Listener at localhost/36799] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:10,703 INFO [Listener at localhost/36799] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:10,704 INFO [Listener at localhost/36799] http.HttpServer(1146): Jetty bound to port 34919 2023-07-16 19:15:10,704 INFO [Listener at localhost/36799] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:10,770 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:10,772 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3e42e83e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:10,773 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:10,773 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a8106ca{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:10,787 INFO [Listener at localhost/36799] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:10,787 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:10,788 INFO [Listener at localhost/36799] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:10,788 INFO [Listener at localhost/36799] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:10,789 INFO [Listener at localhost/36799] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:10,791 INFO [Listener at localhost/36799] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@31489fd5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:10,793 INFO [Listener at localhost/36799] server.AbstractConnector(333): Started ServerConnector@6626147f{HTTP/1.1, (http/1.1)}{0.0.0.0:34919} 2023-07-16 19:15:10,793 INFO [Listener at localhost/36799] server.Server(415): Started @12508ms 2023-07-16 19:15:10,797 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(951): ClusterId : adae0019-4b04-4954-843c-248f454c2bfd 2023-07-16 19:15:10,797 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:10,800 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc 
initialized 2023-07-16 19:15:10,800 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:10,803 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:10,805 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ReadOnlyZKClient(139): Connect 0x170d7385 to 127.0.0.1:50949 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:10,829 DEBUG [RS:3;jenkins-hbase4:35369] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19ca8f14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:10,829 DEBUG [RS:3;jenkins-hbase4:35369] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1df518f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:10,838 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:35369 2023-07-16 19:15:10,839 INFO [RS:3;jenkins-hbase4:35369] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:10,839 INFO [RS:3;jenkins-hbase4:35369] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:10,839 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:10,840 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38143,1689534904450 with isa=jenkins-hbase4.apache.org/172.31.14.131:35369, startcode=1689534910605 2023-07-16 19:15:10,840 DEBUG [RS:3;jenkins-hbase4:35369] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:10,845 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55897, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:10,846 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38143] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,846 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 19:15:10,846 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049 2023-07-16 19:15:10,846 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:34211 2023-07-16 19:15:10,846 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36849 2023-07-16 19:15:10,852 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:10,852 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:10,852 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:10,852 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:10,853 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:10,854 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:10,855 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35369,1689534910605] 2023-07-16 19:15:10,855 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ZKUtil(162): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,855 WARN [RS:3;jenkins-hbase4:35369] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
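The RS:3 startup sequence above (reportForDuty, registration, the ephemeral /hbase/rs znode, the ZNodeClearer warning) is what a fourth region server produces when the test adds one back. A sketch of how a test typically triggers that with the mini-cluster utility; TEST_UTIL stands for the HBaseTestingUtility instance the suite already holds, which is an assumption about naming only.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class AddRegionServer {
  static void addOneRegionServer(HBaseTestingUtility TEST_UTIL) throws Exception {
    // Spins up one more HRegionServer thread; it registers with the master
    // (reportForDuty) and shows up under /hbase/rs, as logged above.
    JVMClusterUtil.RegionServerThread rst =
        TEST_UTIL.getMiniHBaseCluster().startRegionServer();
    rst.waitForServerOnline();   // block until the new server is fully up
  }
}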
2023-07-16 19:15:10,855 INFO [RS:3;jenkins-hbase4:35369] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:10,860 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1948): logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,862 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38143,1689534904450] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 19:15:10,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:10,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:10,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:10,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:10,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:10,863 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:10,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:10,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:10,864 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:10,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,865 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,883 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ZKUtil(162): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:10,884 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ZKUtil(162): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:10,884 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ZKUtil(162): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:10,885 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ZKUtil(162): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,886 DEBUG [RS:3;jenkins-hbase4:35369] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:10,886 INFO [RS:3;jenkins-hbase4:35369] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:10,888 INFO [RS:3;jenkins-hbase4:35369] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:10,889 INFO [RS:3;jenkins-hbase4:35369] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:10,889 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:10,889 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:10,892 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,892 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,893 DEBUG [RS:3;jenkins-hbase4:35369] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:10,894 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:10,894 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:10,894 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:10,914 INFO [RS:3;jenkins-hbase4:35369] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:10,914 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35369,1689534910605-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
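Editor's note: the executor and chore entries above show the new region server (RS:3) wiring its background work (CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore) through ChoreService. Below is a minimal sketch of how a ScheduledChore is defined and scheduled with that API; ChoreService and ScheduledChore are internal (Private-audience) HBase classes, and the chore name, period, and Stoppable stub here are assumptions for illustration only, not the test's own code.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        // Trivial Stoppable so the chore keeps running until we stop it ourselves.
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };

        ChoreService choreService = new ChoreService("demo"); // thread-name prefix
        ScheduledChore chore = new ScheduledChore("DemoChecker", stopper, 1000) {
          @Override protected void chore() {
            // Work that runs once per period, analogous to CompactionChecker above.
            System.out.println("chore tick");
          }
        };
        // Scheduling is what produces the "Chore ScheduledChore name=..., period=...,
        // unit=MILLISECONDS is enabled." entries seen in the log above.
        choreService.scheduleChore(chore);

        Thread.sleep(3000);
        stopper.stop("demo done");
        choreService.shutdown();
      }
    }

The period is interpreted in milliseconds by default, which matches the "period=1000, unit=MILLISECONDS" wording of the log entries.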
2023-07-16 19:15:10,925 INFO [RS:3;jenkins-hbase4:35369] regionserver.Replication(203): jenkins-hbase4.apache.org,35369,1689534910605 started 2023-07-16 19:15:10,926 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35369,1689534910605, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35369, sessionid=0x1016f8f37ae000b 2023-07-16 19:15:10,926 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:10,926 DEBUG [RS:3;jenkins-hbase4:35369] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,926 DEBUG [RS:3;jenkins-hbase4:35369] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35369,1689534910605' 2023-07-16 19:15:10,926 DEBUG [RS:3;jenkins-hbase4:35369] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:10,927 DEBUG [RS:3;jenkins-hbase4:35369] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:10,927 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:10,927 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:10,927 DEBUG [RS:3;jenkins-hbase4:35369] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:10,927 DEBUG [RS:3;jenkins-hbase4:35369] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35369,1689534910605' 2023-07-16 19:15:10,927 DEBUG [RS:3;jenkins-hbase4:35369] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:10,928 DEBUG [RS:3;jenkins-hbase4:35369] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:10,929 DEBUG [RS:3;jenkins-hbase4:35369] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:10,929 INFO [RS:3;jenkins-hbase4:35369] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:10,929 INFO [RS:3;jenkins-hbase4:35369] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
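Editor's note: both quota managers above stay dormant because quota support defaults to off in this mini-cluster. A sketch of the configuration switch a test would flip before starting the cluster if it needed them; the key name is my assumption of the standard setting and should be verified against the HBase version in use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableQuotasSketch {
      public static void main(String[] args) {
        // The RPC and space quota managers stay disabled because this defaults to false.
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.quota.enabled", true); // assumed key; set before cluster startup
        System.out.println("hbase.quota.enabled=" + conf.getBoolean("hbase.quota.enabled", false));
      }
    }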
2023-07-16 19:15:10,932 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:10,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:10,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:10,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:10,947 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:10,953 DEBUG [hconnection-0x62be270e-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:10,957 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:10,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:10,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:10,997 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:10,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:10,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:41906 deadline: 1689536110996, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
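Editor's note: the two entries above record the test harness creating an rsgroup named "master" and then trying to move the master's address (jenkins-hbase4.apache.org:38143) into it; RSGroupAdminServer rejects this with a ConstraintException because the master is not a live region server, and TestRSGroupsBase logs it as expected noise during setup/teardown. A minimal client-side sketch of that call path, using the RSGroupAdminClient API named in the stack trace; the connection setup and class name are illustrative assumptions, while the address, port, and group name are copied from the log.

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterToGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // assumes hbase-site.xml on the classpath
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The master's RPC address is not a region server, so the server side
          // is expected to reject the move, exactly as the log shows.
          Address master = Address.fromParts("jenkins-hbase4.apache.org", 38143);
          try {
            rsGroupAdmin.moveServers(Collections.singleton(master), "master");
          } catch (ConstraintException e) {
            // "Server ... is either offline or it does not exist."
            System.out.println("Rejected as expected: " + e.getMessage());
          }
        }
      }
    }

This mirrors the chain in the stack trace (RSGroupAdminClient.moveServers -> RSGroupAdminEndpoint -> RSGroupAdminServer.moveServers), where the constraint check fires before any region movement is attempted.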
2023-07-16 19:15:10,999 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:11,001 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:11,003 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:11,003 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:11,004 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:11,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:11,010 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:11,011 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:11,012 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:11,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:11,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:11,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:11,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:11,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:11,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:11,033 INFO 
[RS:3;jenkins-hbase4:35369] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35369%2C1689534910605, suffix=, logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,35369,1689534910605, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs, maxLogs=32 2023-07-16 19:15:11,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:11,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:11,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:11,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(238): Moving server region 2635ffdc96eb53d27ddc03fa25e81955, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:11,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:11,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:11,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:11,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:11,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, REOPEN/MOVE 2023-07-16 19:15:11,065 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, REOPEN/MOVE 2023-07-16 19:15:11,069 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(238): Moving server region 34e2a05c74ec47ec61d0b84dc3cec19b, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:11,070 INFO [PEWorker-5] 
assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2635ffdc96eb53d27ddc03fa25e81955, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:11,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:11,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:11,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:11,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:11,070 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534911069"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534911069"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534911069"}]},"ts":"1689534911069"} 2023-07-16 19:15:11,084 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:11,087 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:11,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, REOPEN/MOVE 2023-07-16 19:15:11,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:11,089 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:11,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:11,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:11,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:11,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:11,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:11,094 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, REOPEN/MOVE 2023-07-16 19:15:11,094 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:11,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 19:15:11,097 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-16 19:15:11,097 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 19:15:11,097 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=34e2a05c74ec47ec61d0b84dc3cec19b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:11,097 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534911097"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534911097"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534911097"}]},"ts":"1689534911097"} 2023-07-16 19:15:11,098 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37881,1689534906681, state=CLOSING 2023-07-16 19:15:11,100 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 19:15:11,100 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:11,100 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:11,100 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:11,107 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:11,108 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:11,108 INFO [RS:3;jenkins-hbase4:35369] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,35369,1689534910605/jenkins-hbase4.apache.org%2C35369%2C1689534910605.1689534911035 2023-07-16 19:15:11,109 DEBUG [RS:3;jenkins-hbase4:35369] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK], DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK]] 2023-07-16 19:15:11,264 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-16 19:15:11,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:11,265 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:11,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:11,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:11,265 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:11,266 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-16 19:15:11,367 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/info/88e6c644753d4bd296725eed58b68934 2023-07-16 19:15:11,444 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/table/cbe112304b1f4f368d9bd4d395e99d85 2023-07-16 19:15:11,462 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/info/88e6c644753d4bd296725eed58b68934 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info/88e6c644753d4bd296725eed58b68934 2023-07-16 19:15:11,473 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info/88e6c644753d4bd296725eed58b68934, entries=22, sequenceid=16, filesize=7.3 K 2023-07-16 19:15:11,480 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/table/cbe112304b1f4f368d9bd4d395e99d85 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table/cbe112304b1f4f368d9bd4d395e99d85 2023-07-16 
19:15:11,496 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table/cbe112304b1f4f368d9bd4d395e99d85, entries=4, sequenceid=16, filesize=4.8 K 2023-07-16 19:15:11,498 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 232ms, sequenceid=16, compaction requested=false 2023-07-16 19:15:11,500 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 19:15:11,512 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-16 19:15:11,513 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:11,513 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:11,514 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 19:15:11,514 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,46561,1689534906430 record at close sequenceid=16 2023-07-16 19:15:11,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-16 19:15:11,517 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-16 19:15:11,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-16 19:15:11,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37881,1689534906681 in 418 msec 2023-07-16 19:15:11,523 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:11,673 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
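Editor's note: the pid=15/17/18 procedures above trace hbase:meta (encoded name 1588230740) being closed on jenkins-hbase4.apache.org,37881 and reopened on ...,46561. That move is driven server-side by RSGroupAdminServer through TransitRegionStateProcedure; for comparison, the client-facing way to request a single region move is Admin.move. A sketch under that assumption, with the encoded region name and target server copied from the log; moving hbase:meta by hand is rarely something you would do, so treat this as purely illustrative rather than what the rsgroup code actually calls.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Encoded region name and destination taken from the log entries above;
          // the master then runs the same kind of close/open transition seen here.
          ServerName target =
              ServerName.valueOf("jenkins-hbase4.apache.org", 46561, 1689534906430L);
          admin.move(Bytes.toBytes("1588230740"), target);
        }
      }
    }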
2023-07-16 19:15:11,673 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46561,1689534906430, state=OPENING 2023-07-16 19:15:11,675 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 19:15:11,675 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=15, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:11,675 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:11,830 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:11,830 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:11,834 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46608, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:11,844 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 19:15:11,844 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:11,847 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46561%2C1689534906430.meta, suffix=.meta, logDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,46561,1689534906430, archiveDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs, maxLogs=32 2023-07-16 19:15:11,873 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK] 2023-07-16 19:15:11,873 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK] 2023-07-16 19:15:11,878 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK] 2023-07-16 19:15:11,887 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/WALs/jenkins-hbase4.apache.org,46561,1689534906430/jenkins-hbase4.apache.org%2C46561%2C1689534906430.meta.1689534911849.meta 2023-07-16 19:15:11,890 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:45821,DS-8978be2c-120e-4bac-9600-36a8b2deef6c,DISK], DatanodeInfoWithStorage[127.0.0.1:42907,DS-98a6b168-8469-42ae-9886-ce0d071e64ba,DISK], DatanodeInfoWithStorage[127.0.0.1:46641,DS-3640afe9-07c0-497c-bcb3-0444937e269b,DISK]] 2023-07-16 19:15:11,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:11,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:11,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 19:15:11,891 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 19:15:11,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 19:15:11,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:11,892 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 19:15:11,892 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 19:15:11,894 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:11,895 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info 2023-07-16 19:15:11,895 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info 2023-07-16 19:15:11,896 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:11,907 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info/88e6c644753d4bd296725eed58b68934 2023-07-16 19:15:11,908 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:11,908 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:11,910 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:11,910 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:11,910 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:11,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:11,911 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:11,913 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table 2023-07-16 19:15:11,913 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table 2023-07-16 19:15:11,913 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:11,941 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table/cbe112304b1f4f368d9bd4d395e99d85 2023-07-16 19:15:11,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:11,942 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740 2023-07-16 19:15:11,945 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740 2023-07-16 19:15:11,948 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 19:15:11,950 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:11,951 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11574795680, jitterRate=0.07798685133457184}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:11,951 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:11,955 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689534911830 2023-07-16 19:15:11,959 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 19:15:11,960 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 19:15:11,960 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46561,1689534906430, state=OPEN 2023-07-16 19:15:11,962 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 19:15:11,962 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:11,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-16 19:15:11,966 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,46561,1689534906430 in 287 msec 2023-07-16 19:15:11,970 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 876 msec 2023-07-16 19:15:12,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-16 19:15:12,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2635ffdc96eb53d27ddc03fa25e81955, disabling compactions & flushes 2023-07-16 19:15:12,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. after waiting 0 ms 2023-07-16 19:15:12,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2635ffdc96eb53d27ddc03fa25e81955 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-16 19:15:12,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/.tmp/m/fea01a88bea740e6967dd706d5b19135 2023-07-16 19:15:12,160 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/.tmp/m/fea01a88bea740e6967dd706d5b19135 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m/fea01a88bea740e6967dd706d5b19135 2023-07-16 19:15:12,169 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m/fea01a88bea740e6967dd706d5b19135, entries=3, sequenceid=9, filesize=5.2 K 2023-07-16 19:15:12,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for 2635ffdc96eb53d27ddc03fa25e81955 in 62ms, sequenceid=9, compaction requested=false 2023-07-16 19:15:12,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 19:15:12,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 19:15:12,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:12,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2635ffdc96eb53d27ddc03fa25e81955: 2023-07-16 19:15:12,187 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2635ffdc96eb53d27ddc03fa25e81955 move to jenkins-hbase4.apache.org,46561,1689534906430 record at close sequenceid=9 2023-07-16 19:15:12,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,191 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 34e2a05c74ec47ec61d0b84dc3cec19b, disabling compactions & flushes 2023-07-16 19:15:12,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. after waiting 0 ms 2023-07-16 19:15:12,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:12,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 34e2a05c74ec47ec61d0b84dc3cec19b 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-16 19:15:12,192 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2635ffdc96eb53d27ddc03fa25e81955, regionState=CLOSED 2023-07-16 19:15:12,192 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534912192"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534912192"}]},"ts":"1689534912192"} 2023-07-16 19:15:12,194 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37881] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:35330 deadline: 1689534972193, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46561 startCode=1689534906430. As of locationSeqNum=16. 
2023-07-16 19:15:12,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/.tmp/info/341f930baf1c4653a27c8be3813d0744 2023-07-16 19:15:12,233 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/.tmp/info/341f930baf1c4653a27c8be3813d0744 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info/341f930baf1c4653a27c8be3813d0744 2023-07-16 19:15:12,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info/341f930baf1c4653a27c8be3813d0744, entries=2, sequenceid=6, filesize=4.8 K 2023-07-16 19:15:12,244 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 34e2a05c74ec47ec61d0b84dc3cec19b in 52ms, sequenceid=6, compaction requested=false 2023-07-16 19:15:12,244 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 19:15:12,257 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-16 19:15:12,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 
2023-07-16 19:15:12,258 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 34e2a05c74ec47ec61d0b84dc3cec19b: 2023-07-16 19:15:12,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 34e2a05c74ec47ec61d0b84dc3cec19b move to jenkins-hbase4.apache.org,46561,1689534906430 record at close sequenceid=6 2023-07-16 19:15:12,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,261 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=34e2a05c74ec47ec61d0b84dc3cec19b, regionState=CLOSED 2023-07-16 19:15:12,261 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534912261"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534912261"}]},"ts":"1689534912261"} 2023-07-16 19:15:12,262 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:12,265 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46612, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:12,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-16 19:15:12,271 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,37881,1689534906681 in 1.1670 sec 2023-07-16 19:15:12,272 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:12,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-16 19:15:12,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,37881,1689534906681 in 1.2090 sec 2023-07-16 19:15:12,302 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:12,302 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
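Note: the entries above show the master closing hbase:rsgroup and hbase:namespace on jenkins-hbase4.apache.org,37881 so they can reopen on jenkins-hbase4.apache.org,46561 (the REOPEN/MOVE transitions for pids 12 and 13). The same kind of region move can also be requested directly from a client through the Admin API; a minimal sketch, assuming a cluster reachable via the hbase-site.xml on the classpath, with the destination ServerName copied from the log purely for illustration:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Pick the single region of hbase:namespace (illustrative only).
          RegionInfo region = admin.getRegions(TableName.valueOf("hbase:namespace")).get(0);
          // Destination taken from the log above; any live region server would do.
          ServerName dest = ServerName.valueOf("jenkins-hbase4.apache.org", 46561, 1689534906430L);
          // Asks the master to schedule a REOPEN/MOVE transition for the region.
          admin.move(region.getEncodedNameAsBytes(), dest);
        }
      }
    }

Admin.move only submits the request; the close/open sequence runs asynchronously on the master, much like the procedure chain logged here.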
2023-07-16 19:15:12,302 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=34e2a05c74ec47ec61d0b84dc3cec19b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:12,303 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534912302"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534912302"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534912302"}]},"ts":"1689534912302"} 2023-07-16 19:15:12,304 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2635ffdc96eb53d27ddc03fa25e81955, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:12,304 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534912303"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534912303"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534912303"}]},"ts":"1689534912303"} 2023-07-16 19:15:12,305 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=13, state=RUNNABLE; OpenRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:12,306 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=12, state=RUNNABLE; OpenRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:12,463 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2635ffdc96eb53d27ddc03fa25e81955, NAME => 'hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. service=MultiRowMutationService 2023-07-16 19:15:12,464 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 19:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,464 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,466 INFO [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,467 DEBUG [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m 2023-07-16 19:15:12,467 DEBUG [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m 2023-07-16 19:15:12,468 INFO [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2635ffdc96eb53d27ddc03fa25e81955 columnFamilyName m 2023-07-16 19:15:12,477 DEBUG [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] regionserver.HStore(539): loaded hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m/fea01a88bea740e6967dd706d5b19135 2023-07-16 19:15:12,477 INFO [StoreOpener-2635ffdc96eb53d27ddc03fa25e81955-1] regionserver.HStore(310): Store=2635ffdc96eb53d27ddc03fa25e81955/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:12,478 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,485 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:12,486 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2635ffdc96eb53d27ddc03fa25e81955; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@721dbecc, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:12,486 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2635ffdc96eb53d27ddc03fa25e81955: 2023-07-16 19:15:12,489 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955., pid=20, masterSystemTime=1689534912458 2023-07-16 19:15:12,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,492 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:12,492 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:12,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 34e2a05c74ec47ec61d0b84dc3cec19b, NAME => 'hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:12,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:12,493 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=2635ffdc96eb53d27ddc03fa25e81955, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:12,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,493 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534912493"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534912493"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534912493"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534912493"}]},"ts":"1689534912493"} 2023-07-16 19:15:12,495 INFO [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,497 DEBUG [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info 2023-07-16 19:15:12,497 DEBUG [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info 2023-07-16 19:15:12,497 INFO [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 34e2a05c74ec47ec61d0b84dc3cec19b columnFamilyName info 2023-07-16 19:15:12,503 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=12 2023-07-16 19:15:12,505 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=12, state=SUCCESS; OpenRegionProcedure 2635ffdc96eb53d27ddc03fa25e81955, server=jenkins-hbase4.apache.org,46561,1689534906430 in 190 msec 2023-07-16 19:15:12,508 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=2635ffdc96eb53d27ddc03fa25e81955, REOPEN/MOVE in 1.4560 sec 2023-07-16 19:15:12,511 DEBUG [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] regionserver.HStore(539): loaded hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/info/341f930baf1c4653a27c8be3813d0744 2023-07-16 19:15:12,511 INFO [StoreOpener-34e2a05c74ec47ec61d0b84dc3cec19b-1] regionserver.HStore(310): Store=34e2a05c74ec47ec61d0b84dc3cec19b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:12,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,519 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:12,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 34e2a05c74ec47ec61d0b84dc3cec19b; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10164195840, jitterRate=-0.05338549613952637}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:12,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 34e2a05c74ec47ec61d0b84dc3cec19b: 2023-07-16 19:15:12,522 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b., pid=19, masterSystemTime=1689534912458 2023-07-16 19:15:12,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:12,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 
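Note: once the namespace region logs "Opened" on the new server, a client can confirm the placement with a RegionLocator lookup; a small sketch, again assuming default connection settings:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public class WhereIsMyRegion {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // reload=true bypasses the client's location cache; a stale cache entry is what
          // produced the RegionMovedException responses seen earlier in this log.
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println(loc.getRegion().getRegionNameAsString() + " is on " + loc.getServerName());
        }
      }
    }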
2023-07-16 19:15:12,525 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=34e2a05c74ec47ec61d0b84dc3cec19b, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:12,525 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534912525"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534912525"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534912525"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534912525"}]},"ts":"1689534912525"} 2023-07-16 19:15:12,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=13 2023-07-16 19:15:12,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=13, state=SUCCESS; OpenRegionProcedure 34e2a05c74ec47ec61d0b84dc3cec19b, server=jenkins-hbase4.apache.org,46561,1689534906430 in 222 msec 2023-07-16 19:15:12,532 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=34e2a05c74ec47ec61d0b84dc3cec19b, REOPEN/MOVE in 1.4600 sec 2023-07-16 19:15:13,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to default 2023-07-16 19:15:13,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:13,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:13,100 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37881] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:37756 deadline: 1689534973100, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46561 startCode=1689534906430. As of locationSeqNum=9. 2023-07-16 19:15:13,204 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37881] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:37756 deadline: 1689534973204, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46561 startCode=1689534906430. As of locationSeqNum=16. 
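Note: "Move servers done: default => Group_testTableMoveTruncateAndDrop_716561459" together with the RSGroupAdminService.MoveServers audit entry is the server side of an rsgroup move. A client-side sketch using RSGroupAdminClient from the hbase-rsgroup module; it assumes the target group already exists (addRSGroup creates it), and the host/port below is copied from the log as a placeholder:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          String group = "Group_testTableMoveTruncateAndDrop_716561459";
          // rsgroup membership is keyed by host:port only; the start code is not part of the address.
          Address server = Address.fromParts("jenkins-hbase4.apache.org", 35369);
          rsGroupAdmin.moveServers(Collections.singleton(server), group);
          System.out.println(rsGroupAdmin.getRSGroupInfo(group).getServers());
        }
      }
    }

Moving a server out of the default group is what forces the region closes and reopens recorded above, since its regions must land on servers that remain in the source group.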
2023-07-16 19:15:13,306 DEBUG [hconnection-0x62be270e-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:13,309 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:13,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:13,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:13,334 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:13,334 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:13,344 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:13,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:13,349 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:13,352 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37881] ipc.CallRunner(144): callId: 49 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:35330 deadline: 1689534973352, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46561 startCode=1689534906430. As of locationSeqNum=9. 
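Note: the create request above is logged in shell-descriptor form. The equivalent Java client call, sketched with TableDescriptorBuilder; the five-way even split between 'aaaaa' and 'zzzzz' is an assumption, but it is consistent with the region boundaries (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz) that appear further down in the log:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setRegionReplication(1)                       // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)                          // VERSIONS => '1'
                  .build())
              .build();
          // Five regions, with the first and last split keys at 'aaaaa' and 'zzzzz' and the
          // interior boundaries interpolated evenly between them.
          admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
        }
      }
    }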
2023-07-16 19:15:13,355 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-16 19:15:13,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:13,459 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:13,460 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:13,460 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:13,461 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:13,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:13,469 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:13,475 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:13,475 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:13,475 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:13,475 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:13,475 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:13,475 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 empty. 2023-07-16 19:15:13,476 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 empty. 2023-07-16 19:15:13,476 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 empty. 
2023-07-16 19:15:13,476 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 empty. 2023-07-16 19:15:13,480 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:13,480 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac empty. 2023-07-16 19:15:13,480 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:13,480 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:13,481 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:13,481 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:13,481 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 19:15:13,504 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:13,506 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 69eb2862d0998ef2882d91a5e4c8b894, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:13,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => b956b616a12c71076477d953e21a2fc0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', 
VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:13,507 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => ff104b1b58bb9182d3d487d85ba46227, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:13,550 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:13,550 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing ff104b1b58bb9182d3d487d85ba46227, disabling compactions & flushes 2023-07-16 19:15:13,550 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:13,550 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:13,550 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. after waiting 0 ms 2023-07-16 19:15:13,550 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:13,550 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:13,551 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 
2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing b956b616a12c71076477d953e21a2fc0, disabling compactions & flushes 2023-07-16 19:15:13,551 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. after waiting 0 ms 2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for ff104b1b58bb9182d3d487d85ba46227: 2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:13,551 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 69eb2862d0998ef2882d91a5e4c8b894, disabling compactions & flushes 2023-07-16 19:15:13,551 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:13,551 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:13,552 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for b956b616a12c71076477d953e21a2fc0: 2023-07-16 19:15:13,552 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:13,552 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 
after waiting 0 ms 2023-07-16 19:15:13,552 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => ec932807612791da91041b18e42327ac, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:13,552 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:13,552 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:13,552 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 69eb2862d0998ef2882d91a5e4c8b894: 2023-07-16 19:15:13,552 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5e81655a4881963074c7cd34fd9ea9c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:13,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:13,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5e81655a4881963074c7cd34fd9ea9c1, disabling compactions & flushes 2023-07-16 19:15:13,588 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:13,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:13,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 
after waiting 0 ms 2023-07-16 19:15:13,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:13,588 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:13,588 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5e81655a4881963074c7cd34fd9ea9c1: 2023-07-16 19:15:13,595 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:13,596 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing ec932807612791da91041b18e42327ac, disabling compactions & flushes 2023-07-16 19:15:13,596 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:13,596 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:13,596 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. after waiting 0 ms 2023-07-16 19:15:13,597 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:13,597 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
2023-07-16 19:15:13,597 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for ec932807612791da91041b18e42327ac: 2023-07-16 19:15:13,601 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:13,603 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534913602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534913602"}]},"ts":"1689534913602"} 2023-07-16 19:15:13,603 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534913602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534913602"}]},"ts":"1689534913602"} 2023-07-16 19:15:13,603 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534913602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534913602"}]},"ts":"1689534913602"} 2023-07-16 19:15:13,603 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534913602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534913602"}]},"ts":"1689534913602"} 2023-07-16 19:15:13,603 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534913602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534913602"}]},"ts":"1689534913602"} 2023-07-16 19:15:13,658 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
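Note: after "Added 5 regions to meta", the new regions are recorded in hbase:meta and can be enumerated from a client together with their boundaries; a short sketch using Admin.getRegions:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListTableRegions {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          for (RegionInfo ri : admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
            // toStringBinary renders non-printable key bytes the same way the log does (\xNN).
            System.out.println(ri.getEncodedName() + " ["
                + Bytes.toStringBinary(ri.getStartKey()) + ", "
                + Bytes.toStringBinary(ri.getEndKey()) + ")");
          }
        }
      }
    }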
2023-07-16 19:15:13,659 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:13,660 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534913660"}]},"ts":"1689534913660"} 2023-07-16 19:15:13,662 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 19:15:13,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:13,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:13,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:13,668 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:13,669 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, ASSIGN}] 2023-07-16 19:15:13,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:13,671 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, ASSIGN 2023-07-16 19:15:13,672 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, ASSIGN 2023-07-16 19:15:13,672 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, ASSIGN 2023-07-16 19:15:13,673 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=5e81655a4881963074c7cd34fd9ea9c1, ASSIGN 2023-07-16 19:15:13,674 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:13,674 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, ASSIGN 2023-07-16 19:15:13,674 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:13,674 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:13,674 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:13,676 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:13,824 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
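Note: the balancer has produced a plan and the per-region ASSIGN procedures (pids 22-26) now run asynchronously; a caller that needs the new table online can poll Admin.isTableAvailable. A sketch with an arbitrary 60-second budget and poll interval:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          long deadline = System.currentTimeMillis() + 60_000;   // arbitrary 60 s budget
          while (!(admin.tableExists(table) && admin.isTableAvailable(table))) {
            if (System.currentTimeMillis() > deadline) {
              throw new IllegalStateException("Regions of " + table + " not assigned in time");
            }
            Thread.sleep(200);   // arbitrary poll interval
          }
          System.out.println(table + " is fully assigned and enabled");
        }
      }
    }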
2023-07-16 19:15:13,828 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:13,828 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:13,828 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:13,828 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:13,828 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534913827"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534913827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534913827"}]},"ts":"1689534913827"} 2023-07-16 19:15:13,828 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:13,828 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534913827"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534913827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534913827"}]},"ts":"1689534913827"} 2023-07-16 19:15:13,828 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534913827"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534913827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534913827"}]},"ts":"1689534913827"} 2023-07-16 19:15:13,828 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534913828"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534913828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534913828"}]},"ts":"1689534913828"} 2023-07-16 19:15:13,828 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534913827"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534913827"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534913827"}]},"ts":"1689534913827"} 2023-07-16 19:15:13,831 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=26, state=RUNNABLE; OpenRegionProcedure 
5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:13,832 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=22, state=RUNNABLE; OpenRegionProcedure b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:13,833 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=24, state=RUNNABLE; OpenRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:13,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=23, state=RUNNABLE; OpenRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:13,838 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=25, state=RUNNABLE; OpenRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:13,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:13,991 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:13,991 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:14,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:14,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e81655a4881963074c7cd34fd9ea9c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 19:15:14,006 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53546, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:14,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:14,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,007 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,013 INFO [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,015 
INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ff104b1b58bb9182d3d487d85ba46227, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 19:15:14,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:14,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,016 DEBUG [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/f 2023-07-16 19:15:14,017 DEBUG [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/f 2023-07-16 19:15:14,017 INFO [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e81655a4881963074c7cd34fd9ea9c1 columnFamilyName f 2023-07-16 19:15:14,018 INFO [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] regionserver.HStore(310): Store=5e81655a4881963074c7cd34fd9ea9c1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:14,019 INFO [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,028 DEBUG [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/f 2023-07-16 19:15:14,028 DEBUG [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/f 2023-07-16 19:15:14,030 INFO [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ff104b1b58bb9182d3d487d85ba46227 columnFamilyName f 2023-07-16 19:15:14,031 INFO [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] regionserver.HStore(310): Store=ff104b1b58bb9182d3d487d85ba46227/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:14,031 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,032 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:14,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5e81655a4881963074c7cd34fd9ea9c1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9788564000, jitterRate=-0.08836893737316132}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:14,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5e81655a4881963074c7cd34fd9ea9c1: 2023-07-16 19:15:14,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1., pid=27, masterSystemTime=1689534913988 2023-07-16 19:15:14,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:14,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:14,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:14,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b956b616a12c71076477d953e21a2fc0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 19:15:14,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:14,069 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:14,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:14,069 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534914068"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534914068"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534914068"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534914068"}]},"ts":"1689534914068"} 2023-07-16 19:15:14,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,069 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ff104b1b58bb9182d3d487d85ba46227; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9463400160, jitterRate=-0.11865217983722687}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:14,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ff104b1b58bb9182d3d487d85ba46227: 2023-07-16 19:15:14,075 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227., pid=30, masterSystemTime=1689534913990 2023-07-16 19:15:14,075 INFO [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,083 DEBUG [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/f 2023-07-16 19:15:14,084 DEBUG [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/f 2023-07-16 19:15:14,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,085 INFO [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b956b616a12c71076477d953e21a2fc0 columnFamilyName f 2023-07-16 19:15:14,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 
2023-07-16 19:15:14,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69eb2862d0998ef2882d91a5e4c8b894, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 19:15:14,085 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:14,086 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914085"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534914085"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534914085"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534914085"}]},"ts":"1689534914085"} 2023-07-16 19:15:14,086 INFO [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] regionserver.HStore(310): Store=b956b616a12c71076477d953e21a2fc0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:14,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:14,088 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,089 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=26 2023-07-16 19:15:14,090 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=26, state=SUCCESS; OpenRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,46561,1689534906430 in 246 msec 2023-07-16 19:15:14,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,093 INFO [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,100 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, ASSIGN in 422 msec 2023-07-16 19:15:14,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=23 2023-07-16 19:15:14,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=23, state=SUCCESS; OpenRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,42201,1689534906603 in 253 msec 2023-07-16 19:15:14,102 DEBUG [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/f 2023-07-16 19:15:14,102 DEBUG [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/f 2023-07-16 19:15:14,102 INFO [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69eb2862d0998ef2882d91a5e4c8b894 columnFamilyName f 2023-07-16 19:15:14,103 INFO [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] regionserver.HStore(310): Store=69eb2862d0998ef2882d91a5e4c8b894/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:14,104 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, ASSIGN in 433 msec 2023-07-16 19:15:14,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,111 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,114 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:14,116 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b956b616a12c71076477d953e21a2fc0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11870799040, jitterRate=0.10555431246757507}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:14,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b956b616a12c71076477d953e21a2fc0: 2023-07-16 19:15:14,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0., pid=28, masterSystemTime=1689534913988 2023-07-16 19:15:14,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:14,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 69eb2862d0998ef2882d91a5e4c8b894; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9798482720, jitterRate=-0.08744518458843231}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:14,121 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 69eb2862d0998ef2882d91a5e4c8b894: 2023-07-16 19:15:14,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:14,122 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 
2023-07-16 19:15:14,122 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894., pid=29, masterSystemTime=1689534913990 2023-07-16 19:15:14,123 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:14,123 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534914123"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534914123"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534914123"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534914123"}]},"ts":"1689534914123"} 2023-07-16 19:15:14,125 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:14,125 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:14,126 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:14,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec932807612791da91041b18e42327ac, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 19:15:14,126 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:14,127 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914126"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534914126"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534914126"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534914126"}]},"ts":"1689534914126"} 2023-07-16 19:15:14,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:14,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 
ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,128 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=22 2023-07-16 19:15:14,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=22, state=SUCCESS; OpenRegionProcedure b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,46561,1689534906430 in 294 msec 2023-07-16 19:15:14,133 INFO [StoreOpener-ec932807612791da91041b18e42327ac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,135 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, ASSIGN in 462 msec 2023-07-16 19:15:14,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=24 2023-07-16 19:15:14,136 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=24, state=SUCCESS; OpenRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,42201,1689534906603 in 297 msec 2023-07-16 19:15:14,137 DEBUG [StoreOpener-ec932807612791da91041b18e42327ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/f 2023-07-16 19:15:14,137 DEBUG [StoreOpener-ec932807612791da91041b18e42327ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/f 2023-07-16 19:15:14,137 INFO [StoreOpener-ec932807612791da91041b18e42327ac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec932807612791da91041b18e42327ac columnFamilyName f 2023-07-16 19:15:14,138 INFO [StoreOpener-ec932807612791da91041b18e42327ac-1] regionserver.HStore(310): Store=ec932807612791da91041b18e42327ac/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:14,138 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, ASSIGN in 468 msec 2023-07-16 19:15:14,140 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:14,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ec932807612791da91041b18e42327ac; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10586976160, jitterRate=-0.014011010527610779}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:14,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ec932807612791da91041b18e42327ac: 2023-07-16 19:15:14,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac., pid=31, masterSystemTime=1689534913990 2023-07-16 19:15:14,150 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:14,150 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
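[Editorial note] At this point all five regions have been opened on the two servers hosting the group. The entries that follow show the client-side create-table future completing (procId: 21), the test utility waiting until every region of the table is assigned, and an RSGroupAdminService.GetRSGroupInfoOfTable request. A minimal sketch of the corresponding client calls, assuming the standard HBaseTestingUtility and RSGroupAdmin client APIs; the real test code is not included in this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

final class WaitAndQueryGroupSketch {
  // Illustrative only: mirrors the "Waiting until all regions ... get assigned"
  // and GetRSGroupInfoOfTable entries that follow in the log.
  static RSGroupInfo waitAndQuery(HBaseTestingUtility util, RSGroupAdmin rsGroupAdmin)
      throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    util.waitUntilAllRegionsAssigned(table);          // blocks until hbase:meta shows all regions assigned (60s timeout in the log)
    return rsGroupAdmin.getRSGroupInfoOfTable(table); // a freshly created table starts out in the "default" group
  }
}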
2023-07-16 19:15:14,151 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:14,151 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914151"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534914151"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534914151"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534914151"}]},"ts":"1689534914151"} 2023-07-16 19:15:14,157 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=25 2023-07-16 19:15:14,157 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=25, state=SUCCESS; OpenRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,42201,1689534906603 in 316 msec 2023-07-16 19:15:14,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=21 2023-07-16 19:15:14,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, ASSIGN in 489 msec 2023-07-16 19:15:14,162 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:14,163 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534914163"}]},"ts":"1689534914163"} 2023-07-16 19:15:14,165 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 19:15:14,169 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:14,172 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 824 msec 2023-07-16 19:15:14,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:14,474 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-16 19:15:14,475 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-16 19:15:14,476 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:14,477 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37881] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:35356 deadline: 1689534974477, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46561 startCode=1689534906430. As of locationSeqNum=16. 2023-07-16 19:15:14,579 DEBUG [hconnection-0xfdeaa0f-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:14,583 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:14,592 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-16 19:15:14,592 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:14,593 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-16 19:15:14,593 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:14,598 DEBUG [Listener at localhost/36799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:14,602 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46760, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:14,604 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 19:15:14,605 DEBUG [Listener at localhost/36799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:14,609 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37768, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:14,610 DEBUG [Listener at localhost/36799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:14,622 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53558, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:14,624 DEBUG [Listener at localhost/36799] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:14,637 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46640, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:14,650 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:14,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:14,652 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,669 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:14,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:14,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:14,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region b956b616a12c71076477d953e21a2fc0 to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:14,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:14,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:14,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:14,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:14,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, REOPEN/MOVE 2023-07-16 19:15:14,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region ff104b1b58bb9182d3d487d85ba46227 to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,691 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, REOPEN/MOVE 2023-07-16 19:15:14,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:14,693 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:14,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:14,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:14,693 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:14,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:14,694 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534914693"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534914693"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534914693"}]},"ts":"1689534914693"} 2023-07-16 19:15:14,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, REOPEN/MOVE 2023-07-16 19:15:14,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=32, state=RUNNABLE; CloseRegionProcedure b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:14,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 69eb2862d0998ef2882d91a5e4c8b894 to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,697 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, REOPEN/MOVE 2023-07-16 19:15:14,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:14,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:14,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:14,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:14,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:14,699 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:14,699 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914699"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534914699"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534914699"}]},"ts":"1689534914699"} 2023-07-16 19:15:14,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, REOPEN/MOVE 2023-07-16 19:15:14,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region ec932807612791da91041b18e42327ac to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,701 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, REOPEN/MOVE 2023-07-16 19:15:14,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:14,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:14,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:14,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:14,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:14,703 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-16 19:15:14,703 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:14,703 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:14,703 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914703"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534914703"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534914703"}]},"ts":"1689534914703"} 2023-07-16 19:15:14,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, REOPEN/MOVE 2023-07-16 19:15:14,707 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, REOPEN/MOVE 2023-07-16 19:15:14,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 5e81655a4881963074c7cd34fd9ea9c1 to RSGroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:14,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:14,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:14,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:14,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:14,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:14,709 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=35, state=RUNNABLE; CloseRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:14,710 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 19:15:14,711 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-16 19:15:14,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, REOPEN/MOVE 2023-07-16 19:15:14,711 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:14,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_716561459, current retry=0 2023-07-16 19:15:14,715 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, REOPEN/MOVE 2023-07-16 19:15:14,716 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:14,716 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-16 19:15:14,716 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914711"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534914711"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534914711"}]},"ts":"1689534914711"} 2023-07-16 19:15:14,717 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 19:15:14,717 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-16 19:15:14,717 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:14,717 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534914717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534914717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534914717"}]},"ts":"1689534914717"} 2023-07-16 19:15:14,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=36, state=RUNNABLE; CloseRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:14,725 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=38, state=RUNNABLE; CloseRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:14,855 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5e81655a4881963074c7cd34fd9ea9c1, disabling compactions & flushes 2023-07-16 19:15:14,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:14,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:14,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. after waiting 0 ms 2023-07-16 19:15:14,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 
2023-07-16 19:15:14,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ff104b1b58bb9182d3d487d85ba46227, disabling compactions & flushes 2023-07-16 19:15:14,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. after waiting 0 ms 2023-07-16 19:15:14,885 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:14,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 
2023-07-16 19:15:14,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5e81655a4881963074c7cd34fd9ea9c1: 2023-07-16 19:15:14,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5e81655a4881963074c7cd34fd9ea9c1 move to jenkins-hbase4.apache.org,37881,1689534906681 record at close sequenceid=2 2023-07-16 19:15:14,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:14,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,900 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=CLOSED 2023-07-16 19:15:14,900 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534914900"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534914900"}]},"ts":"1689534914900"} 2023-07-16 19:15:14,905 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=38 2023-07-16 19:15:14,905 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; CloseRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,46561,1689534906430 in 177 msec 2023-07-16 19:15:14,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b956b616a12c71076477d953e21a2fc0, disabling compactions & flushes 2023-07-16 19:15:14,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:14,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:14,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. after waiting 0 ms 2023-07-16 19:15:14,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 
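Each "RegionStateStore(405): Put" entry above is the master rewriting the region's row in hbase:meta as the move progresses: info:state carries the textual state (CLOSING, CLOSED, later OPENING/OPEN), info:sn the "host,port,startcode" of the server handling the transition, and info:regioninfo the serialized RegionInfo. A small sketch, offered as an illustration rather than anything the test itself does, of reading those columns back with the ordinary client API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DumpMetaRegionStates {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] info = Bytes.toBytes("info");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // hbase:meta rows are keyed "<table>,<startkey>,<regionid>.<encodedname>.",
      // so a prefix scan on "<table>," returns exactly this table's regions.
      Scan scan = new Scan()
          .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result r : scanner) {
          String state = Bytes.toString(r.getValue(info, Bytes.toBytes("state")));
          String sn = Bytes.toString(r.getValue(info, Bytes.toBytes("sn")));
          System.out.println(Bytes.toStringBinary(r.getRow())
              + " state=" + state + " sn=" + sn);
        }
      }
    }
  }
}
```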
2023-07-16 19:15:14,912 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=38, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:14,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:14,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:14,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ff104b1b58bb9182d3d487d85ba46227: 2023-07-16 19:15:14,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ff104b1b58bb9182d3d487d85ba46227 move to jenkins-hbase4.apache.org,35369,1689534910605 record at close sequenceid=2 2023-07-16 19:15:14,930 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=CLOSED 2023-07-16 19:15:14,930 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914929"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534914929"}]},"ts":"1689534914929"} 2023-07-16 19:15:14,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:14,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-16 19:15:14,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,42201,1689534906603 in 229 msec 2023-07-16 19:15:14,937 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:14,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 69eb2862d0998ef2882d91a5e4c8b894, disabling compactions & flushes 2023-07-16 19:15:14,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 
2023-07-16 19:15:14,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:14,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. after waiting 0 ms 2023-07-16 19:15:14,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:14,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:14,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:14,949 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b956b616a12c71076477d953e21a2fc0: 2023-07-16 19:15:14,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b956b616a12c71076477d953e21a2fc0 move to jenkins-hbase4.apache.org,37881,1689534906681 record at close sequenceid=2 2023-07-16 19:15:14,952 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:14,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:14,957 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=CLOSED 2023-07-16 19:15:14,958 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534914957"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534914957"}]},"ts":"1689534914957"} 2023-07-16 19:15:14,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 
2023-07-16 19:15:14,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 69eb2862d0998ef2882d91a5e4c8b894: 2023-07-16 19:15:14,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 69eb2862d0998ef2882d91a5e4c8b894 move to jenkins-hbase4.apache.org,35369,1689534910605 record at close sequenceid=2 2023-07-16 19:15:14,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:14,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ec932807612791da91041b18e42327ac, disabling compactions & flushes 2023-07-16 19:15:14,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:14,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:14,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. after waiting 0 ms 2023-07-16 19:15:14,962 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
2023-07-16 19:15:14,963 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=CLOSED 2023-07-16 19:15:14,964 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914963"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534914963"}]},"ts":"1689534914963"} 2023-07-16 19:15:14,967 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=32 2023-07-16 19:15:14,968 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=32, state=SUCCESS; CloseRegionProcedure b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,46561,1689534906430 in 266 msec 2023-07-16 19:15:14,970 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:14,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=35 2023-07-16 19:15:14,971 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=35, state=SUCCESS; CloseRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,42201,1689534906603 in 257 msec 2023-07-16 19:15:14,975 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:14,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:14,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
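The "Adding <region> move to <server> record at close sequenceid=2" lines record the destination each closed region will reopen on, and the follow-up TransitRegionStateProcedure (GET_ASSIGN_CANDIDATE, retain=false) is what carries the region to that server. The same REOPEN/MOVE machinery can be driven for a single region from a client via Admin.move; a hedged sketch, where the encoded region name and the "host,port,startcode" server name are copied from the log and the exact overload available should be checked against the installed 2.4 client:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveSingleRegion {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Encoded region name and target server as they appear in the log above.
      String encodedRegion = "5e81655a4881963074c7cd34fd9ea9c1";
      ServerName target =
          ServerName.valueOf("jenkins-hbase4.apache.org,37881,1689534906681");
      // Triggers a TransitRegionStateProcedure (REOPEN/MOVE) for just this region:
      // close on the current server, update hbase:meta, reopen on 'target'.
      admin.move(Bytes.toBytes(encodedRegion), target);
    }
  }
}
```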
2023-07-16 19:15:14,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ec932807612791da91041b18e42327ac: 2023-07-16 19:15:14,978 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ec932807612791da91041b18e42327ac move to jenkins-hbase4.apache.org,35369,1689534910605 record at close sequenceid=2 2023-07-16 19:15:14,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ec932807612791da91041b18e42327ac 2023-07-16 19:15:14,981 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=CLOSED 2023-07-16 19:15:14,981 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534914981"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534914981"}]},"ts":"1689534914981"} 2023-07-16 19:15:14,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=36 2023-07-16 19:15:14,986 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=36, state=SUCCESS; CloseRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,42201,1689534906603 in 264 msec 2023-07-16 19:15:14,991 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:15,063 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
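At this point every close has finished and the balancer has produced new plans ("Reassigned 5 regions"); the OPENING updates that follow show the regions landing on the two servers of the target group (ports 35369 and 37881). One way to verify placement from a client, again as an illustrative sketch rather than part of the test:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class ShowRegionLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Lists every region of the table with the server it is open on,
      // matching the regionLocation values written to hbase:meta above.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(Bytes.toStringBinary(loc.getRegion().getStartKey())
            + " (" + loc.getRegion().getEncodedName() + ") -> " + loc.getServerName());
      }
    }
  }
}
```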
2023-07-16 19:15:15,063 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:15,063 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,063 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915063"}]},"ts":"1689534915063"} 2023-07-16 19:15:15,063 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,063 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915063"}]},"ts":"1689534915063"} 2023-07-16 19:15:15,063 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,064 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915063"}]},"ts":"1689534915063"} 2023-07-16 19:15:15,063 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:15,064 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915063"}]},"ts":"1689534915063"} 2023-07-16 19:15:15,064 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915063"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915063"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915063"}]},"ts":"1689534915063"} 2023-07-16 19:15:15,066 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=32, state=RUNNABLE; OpenRegionProcedure 
b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:15,067 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=33, state=RUNNABLE; OpenRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:15,070 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=35, state=RUNNABLE; OpenRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:15,071 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=38, state=RUNNABLE; OpenRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:15,073 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=36, state=RUNNABLE; OpenRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:15,223 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,223 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:15,225 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46770, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:15,227 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 
2023-07-16 19:15:15,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b956b616a12c71076477d953e21a2fc0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 19:15:15,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:15,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,229 INFO [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ff104b1b58bb9182d3d487d85ba46227, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 19:15:15,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:15,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,233 DEBUG [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/f 2023-07-16 19:15:15,233 DEBUG [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/f 2023-07-16 19:15:15,234 INFO [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b956b616a12c71076477d953e21a2fc0 columnFamilyName f 2023-07-16 19:15:15,236 INFO [StoreOpener-b956b616a12c71076477d953e21a2fc0-1] regionserver.HStore(310): Store=b956b616a12c71076477d953e21a2fc0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:15,237 INFO [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,240 DEBUG [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/f 2023-07-16 19:15:15,240 DEBUG [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/f 2023-07-16 19:15:15,240 INFO [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ff104b1b58bb9182d3d487d85ba46227 columnFamilyName f 2023-07-16 19:15:15,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,241 INFO [StoreOpener-ff104b1b58bb9182d3d487d85ba46227-1] regionserver.HStore(310): Store=ff104b1b58bb9182d3d487d85ba46227/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:15,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b956b616a12c71076477d953e21a2fc0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11728444800, jitterRate=0.0922965407371521}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:15,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b956b616a12c71076477d953e21a2fc0: 2023-07-16 19:15:15,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ff104b1b58bb9182d3d487d85ba46227; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10791507200, jitterRate=0.005037426948547363}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:15,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ff104b1b58bb9182d3d487d85ba46227: 2023-07-16 19:15:15,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0., pid=42, masterSystemTime=1689534915221 2023-07-16 19:15:15,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227., pid=43, masterSystemTime=1689534915223 2023-07-16 19:15:15,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:15,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 
2023-07-16 19:15:15,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:15,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e81655a4881963074c7cd34fd9ea9c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 19:15:15,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:15,256 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:15,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,256 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915256"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534915256"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534915256"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534915256"}]},"ts":"1689534915256"} 2023-07-16 19:15:15,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 
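The "Opening region" entries spell out the boundaries the table was created with: five regions split at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz', each carrying a single column family 'f' and the default SteppingSplitPolicy. A hedged sketch of creating a table with exactly those boundaries; the split keys and family name come from the log, while the builder-based creation is plain 2.x client API rather than the test's own helper:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Four split points produce five regions with the STARTKEY/ENDKEY pairs
      // visible in the log ('' .. aaaaa, aaaaa .. i\xBF\x14i\xBE, and so on).
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
          new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(
          TableDescriptorBuilder.newBuilder(table)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
    }
  }
}
```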
2023-07-16 19:15:15,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69eb2862d0998ef2882d91a5e4c8b894, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 19:15:15,258 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,259 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915258"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534915258"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534915258"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534915258"}]},"ts":"1689534915258"} 2023-07-16 19:15:15,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:15,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,263 INFO [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,264 INFO [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,265 DEBUG [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/f 2023-07-16 19:15:15,265 DEBUG [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/f 2023-07-16 19:15:15,265 DEBUG [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/f 2023-07-16 19:15:15,265 DEBUG [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/f 2023-07-16 19:15:15,265 INFO [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69eb2862d0998ef2882d91a5e4c8b894 columnFamilyName f 2023-07-16 19:15:15,265 INFO [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e81655a4881963074c7cd34fd9ea9c1 columnFamilyName f 2023-07-16 19:15:15,266 INFO [StoreOpener-5e81655a4881963074c7cd34fd9ea9c1-1] regionserver.HStore(310): Store=5e81655a4881963074c7cd34fd9ea9c1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:15,266 INFO [StoreOpener-69eb2862d0998ef2882d91a5e4c8b894-1] regionserver.HStore(310): Store=69eb2862d0998ef2882d91a5e4c8b894/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:15,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=32 2023-07-16 19:15:15,272 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=32, state=SUCCESS; OpenRegionProcedure b956b616a12c71076477d953e21a2fc0, 
server=jenkins-hbase4.apache.org,37881,1689534906681 in 195 msec 2023-07-16 19:15:15,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,274 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=33 2023-07-16 19:15:15,275 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=33, state=SUCCESS; OpenRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,35369,1689534910605 in 195 msec 2023-07-16 19:15:15,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, REOPEN/MOVE in 583 msec 2023-07-16 19:15:15,279 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, REOPEN/MOVE in 581 msec 2023-07-16 19:15:15,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5e81655a4881963074c7cd34fd9ea9c1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9967924480, jitterRate=-0.07166469097137451}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:15,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5e81655a4881963074c7cd34fd9ea9c1: 2023-07-16 19:15:15,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1., pid=45, masterSystemTime=1689534915221 2023-07-16 19:15:15,284 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 69eb2862d0998ef2882d91a5e4c8b894; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9753196480, jitterRate=-0.09166279435157776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:15,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 69eb2862d0998ef2882d91a5e4c8b894: 2023-07-16 19:15:15,285 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894., pid=44, masterSystemTime=1689534915223 2023-07-16 
19:15:15,288 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=38 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:15,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:15,289 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915288"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534915288"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534915288"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534915288"}]},"ts":"1689534915288"} 2023-07-16 19:15:15,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:15,289 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:15,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ec932807612791da91041b18e42327ac, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 19:15:15,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,290 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,293 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 
2023-07-16 19:15:15,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:15,294 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915290"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534915290"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534915290"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534915290"}]},"ts":"1689534915290"} 2023-07-16 19:15:15,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:15,300 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=38 2023-07-16 19:15:15,300 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=38, state=SUCCESS; OpenRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,37881,1689534906681 in 225 msec 2023-07-16 19:15:15,301 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=35 2023-07-16 19:15:15,302 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=35, state=SUCCESS; OpenRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,35369,1689534910605 in 228 msec 2023-07-16 19:15:15,303 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, REOPEN/MOVE in 592 msec 2023-07-16 19:15:15,304 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, REOPEN/MOVE in 604 msec 2023-07-16 19:15:15,307 INFO [StoreOpener-ec932807612791da91041b18e42327ac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,308 DEBUG [StoreOpener-ec932807612791da91041b18e42327ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/f 2023-07-16 19:15:15,309 DEBUG [StoreOpener-ec932807612791da91041b18e42327ac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/f 2023-07-16 19:15:15,309 INFO [StoreOpener-ec932807612791da91041b18e42327ac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ec932807612791da91041b18e42327ac columnFamilyName f 2023-07-16 19:15:15,310 INFO [StoreOpener-ec932807612791da91041b18e42327ac-1] regionserver.HStore(310): Store=ec932807612791da91041b18e42327ac/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:15,311 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,313 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,317 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,318 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ec932807612791da91041b18e42327ac; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10633630720, jitterRate=-0.009665966033935547}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:15,318 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ec932807612791da91041b18e42327ac: 2023-07-16 19:15:15,319 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac., pid=46, masterSystemTime=1689534915223 2023-07-16 19:15:15,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:15,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
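The CompactionConfiguration lines printed while each store opens are all defaults, and each field corresponds to a standard hbase-site.xml key: minCompactSize to hbase.hstore.compaction.min.size, minFilesToCompact/maxFilesToCompact to hbase.hstore.compaction.min/.max, the 1.2 ratio to hbase.hstore.compaction.ratio, the 604800000 ms major period to hbase.hregion.majorcompaction, and so on. A hedged sketch of setting a few of them programmatically; the key-to-field mapping reflects my reading of those defaults rather than anything stated in the log itself:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Counterparts of the fields logged above (values here are the defaults).
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    conf.setInt("hbase.hstore.compaction.min", 3);   // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);  // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);
    conf.setLong("hbase.hregion.majorcompaction", 7L * 24 * 60 * 60 * 1000); // major period
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);
    System.out.println("compaction ratio = " + conf.get("hbase.hstore.compaction.ratio"));
  }
}
```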
2023-07-16 19:15:15,322 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,322 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915322"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534915322"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534915322"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534915322"}]},"ts":"1689534915322"} 2023-07-16 19:15:15,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=36 2023-07-16 19:15:15,327 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=36, state=SUCCESS; OpenRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,35369,1689534910605 in 251 msec 2023-07-16 19:15:15,328 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=36, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, REOPEN/MOVE in 626 msec 2023-07-16 19:15:15,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-16 19:15:15,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_716561459. 
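At this point the RSGroupAdminServer confirms that every region of Group_testTableMoveTruncateAndDrop has landed in the target group, and the client immediately follows up with ListRSGroupInfos and GetRSGroupInfoOfTable requests (next entries below). A minimal sketch of the client-side calls that drive these requests, assuming the RSGroupAdminClient helper from the hbase-rsgroup module that this test exercises; connection settings and error handling are omitted, and the table and group names are taken from the log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          String targetGroup = "Group_testTableMoveTruncateAndDrop_716561459";
          // Issues the RSGroupAdminService.MoveTables request seen in the log and waits
          // for the server-side region moves to finish.
          rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
          // Corresponds to the GetRSGroupInfoOfTable request logged right after the move.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("table now in group: " + info.getName());
        }
      }
    }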
2023-07-16 19:15:15,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:15,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:15,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:15,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:15,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:15,727 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:15,734 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:15,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:15,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:15,753 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534915753"}]},"ts":"1689534915753"} 2023-07-16 19:15:15,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-16 19:15:15,755 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 19:15:15,757 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 19:15:15,763 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, UNASSIGN}] 2023-07-16 19:15:15,766 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, UNASSIGN 2023-07-16 19:15:15,767 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, UNASSIGN 2023-07-16 19:15:15,767 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, UNASSIGN 2023-07-16 19:15:15,767 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, UNASSIGN 2023-07-16 19:15:15,768 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, UNASSIGN 2023-07-16 19:15:15,768 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,769 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915768"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915768"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915768"}]},"ts":"1689534915768"} 2023-07-16 19:15:15,769 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:15,769 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915769"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915769"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915769"}]},"ts":"1689534915769"} 2023-07-16 19:15:15,772 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,772 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:15,772 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915772"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915772"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915772"}]},"ts":"1689534915772"} 2023-07-16 19:15:15,772 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:15,772 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915772"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915772"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915772"}]},"ts":"1689534915772"} 2023-07-16 19:15:15,773 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=49, state=RUNNABLE; CloseRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:15,772 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915772"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534915772"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534915772"}]},"ts":"1689534915772"} 2023-07-16 19:15:15,774 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=48, state=RUNNABLE; CloseRegionProcedure b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:15,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=51, state=RUNNABLE; CloseRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:15,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=50, state=RUNNABLE; CloseRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:15,780 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; CloseRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:15,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-16 19:15:15,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 69eb2862d0998ef2882d91a5e4c8b894, disabling compactions & flushes 2023-07-16 19:15:15,928 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 
2023-07-16 19:15:15,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:15,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. after waiting 0 ms 2023-07-16 19:15:15,928 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:15,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b956b616a12c71076477d953e21a2fc0, disabling compactions & flushes 2023-07-16 19:15:15,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:15,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:15,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. after waiting 0 ms 2023-07-16 19:15:15,930 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 2023-07-16 19:15:15,938 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:15,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:15,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894. 2023-07-16 19:15:15,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 69eb2862d0998ef2882d91a5e4c8b894: 2023-07-16 19:15:15,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0. 
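The DisableTableProcedure (pid=47) above fans out one UNASSIGN TransitRegionStateProcedure per region, and the regionservers are now closing those regions. From the client side the whole sequence is triggered by a single Admin call, roughly as in this sketch; the table name comes from the log, everything else is boilerplate.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Submits a DisableTableProcedure on the master (pid=47 in the log) and blocks
          // until all regions are unassigned and the table state in hbase:meta is DISABLED.
          admin.disableTable(table);
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }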
2023-07-16 19:15:15,942 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b956b616a12c71076477d953e21a2fc0: 2023-07-16 19:15:15,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:15,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ff104b1b58bb9182d3d487d85ba46227, disabling compactions & flushes 2023-07-16 19:15:15,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. after waiting 0 ms 2023-07-16 19:15:15,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,945 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=69eb2862d0998ef2882d91a5e4c8b894, regionState=CLOSED 2023-07-16 19:15:15,945 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915945"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534915945"}]},"ts":"1689534915945"} 2023-07-16 19:15:15,945 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:15,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5e81655a4881963074c7cd34fd9ea9c1, disabling compactions & flushes 2023-07-16 19:15:15,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:15,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:15,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. after waiting 0 ms 2023-07-16 19:15:15,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 
2023-07-16 19:15:15,947 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=b956b616a12c71076477d953e21a2fc0, regionState=CLOSED 2023-07-16 19:15:15,947 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915947"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534915947"}]},"ts":"1689534915947"} 2023-07-16 19:15:15,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:15,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227. 2023-07-16 19:15:15,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ff104b1b58bb9182d3d487d85ba46227: 2023-07-16 19:15:15,958 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:15,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1. 2023-07-16 19:15:15,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5e81655a4881963074c7cd34fd9ea9c1: 2023-07-16 19:15:15,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:15,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ec932807612791da91041b18e42327ac, disabling compactions & flushes 2023-07-16 19:15:15,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=50 2023-07-16 19:15:15,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:15,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:15,961 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=50, state=SUCCESS; CloseRegionProcedure 69eb2862d0998ef2882d91a5e4c8b894, server=jenkins-hbase4.apache.org,35369,1689534910605 in 169 msec 2023-07-16 19:15:15,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
after waiting 0 ms 2023-07-16 19:15:15,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 2023-07-16 19:15:15,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=48 2023-07-16 19:15:15,962 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=48, state=SUCCESS; CloseRegionProcedure b956b616a12c71076477d953e21a2fc0, server=jenkins-hbase4.apache.org,37881,1689534906681 in 181 msec 2023-07-16 19:15:15,962 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:15,963 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69eb2862d0998ef2882d91a5e4c8b894, UNASSIGN in 202 msec 2023-07-16 19:15:15,966 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=5e81655a4881963074c7cd34fd9ea9c1, regionState=CLOSED 2023-07-16 19:15:15,967 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534915966"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534915966"}]},"ts":"1689534915966"} 2023-07-16 19:15:15,967 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b956b616a12c71076477d953e21a2fc0, UNASSIGN in 203 msec 2023-07-16 19:15:15,967 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=ff104b1b58bb9182d3d487d85ba46227, regionState=CLOSED 2023-07-16 19:15:15,967 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915967"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534915967"}]},"ts":"1689534915967"} 2023-07-16 19:15:15,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:15,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac. 
2023-07-16 19:15:15,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ec932807612791da91041b18e42327ac: 2023-07-16 19:15:15,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ec932807612791da91041b18e42327ac 2023-07-16 19:15:15,973 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-16 19:15:15,973 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; CloseRegionProcedure 5e81655a4881963074c7cd34fd9ea9c1, server=jenkins-hbase4.apache.org,37881,1689534906681 in 189 msec 2023-07-16 19:15:15,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-16 19:15:15,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; CloseRegionProcedure ff104b1b58bb9182d3d487d85ba46227, server=jenkins-hbase4.apache.org,35369,1689534910605 in 197 msec 2023-07-16 19:15:15,979 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5e81655a4881963074c7cd34fd9ea9c1, UNASSIGN in 210 msec 2023-07-16 19:15:15,979 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=ec932807612791da91041b18e42327ac, regionState=CLOSED 2023-07-16 19:15:15,980 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534915979"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534915979"}]},"ts":"1689534915979"} 2023-07-16 19:15:15,981 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ff104b1b58bb9182d3d487d85ba46227, UNASSIGN in 215 msec 2023-07-16 19:15:15,989 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=51 2023-07-16 19:15:15,989 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=51, state=SUCCESS; CloseRegionProcedure ec932807612791da91041b18e42327ac, server=jenkins-hbase4.apache.org,35369,1689534910605 in 207 msec 2023-07-16 19:15:15,995 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-16 19:15:15,995 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ec932807612791da91041b18e42327ac, UNASSIGN in 230 msec 2023-07-16 19:15:15,996 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534915996"}]},"ts":"1689534915996"} 2023-07-16 19:15:15,998 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 19:15:16,000 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 19:15:16,003 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; 
DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 260 msec 2023-07-16 19:15:16,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-16 19:15:16,058 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-16 19:15:16,059 INFO [Listener at localhost/36799] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:16,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:16,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-16 19:15:16,075 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-16 19:15:16,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 19:15:16,088 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:16,088 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:16,088 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:16,088 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:16,088 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:16,093 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/recovered.edits] 2023-07-16 19:15:16,093 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/f, FileablePath, 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/recovered.edits] 2023-07-16 19:15:16,093 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/recovered.edits] 2023-07-16 19:15:16,093 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/recovered.edits] 2023-07-16 19:15:16,096 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/recovered.edits] 2023-07-16 19:15:16,109 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/recovered.edits/7.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227/recovered.edits/7.seqid 2023-07-16 19:15:16,109 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/recovered.edits/7.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1/recovered.edits/7.seqid 2023-07-16 19:15:16,109 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/recovered.edits/7.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894/recovered.edits/7.seqid 2023-07-16 19:15:16,110 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/recovered.edits/7.seqid to 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac/recovered.edits/7.seqid 2023-07-16 19:15:16,111 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ff104b1b58bb9182d3d487d85ba46227 2023-07-16 19:15:16,111 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5e81655a4881963074c7cd34fd9ea9c1 2023-07-16 19:15:16,111 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69eb2862d0998ef2882d91a5e4c8b894 2023-07-16 19:15:16,111 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ec932807612791da91041b18e42327ac 2023-07-16 19:15:16,112 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/recovered.edits/7.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0/recovered.edits/7.seqid 2023-07-16 19:15:16,113 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b956b616a12c71076477d953e21a2fc0 2023-07-16 19:15:16,113 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 19:15:16,152 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 19:15:16,156 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 19:15:16,157 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
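The TruncateTableProcedure (pid=58) is now archiving the old region directories and removing the table from hbase:meta before recreating it with the same split points (preserveSplits=true in the stored procedure above). Client-side this whole step is one Admin call on the already-disabled table; a minimal sketch:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // The table must already be disabled (pid=47 above). preserveSplits=true keeps
          // the original split boundaries, which is why five new regions appear below.
          admin.truncateTable(table, true);
        }
      }
    }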
2023-07-16 19:15:16,157 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534916157"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:16,157 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534916157"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:16,157 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534916157"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:16,157 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534913341.ec932807612791da91041b18e42327ac.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534916157"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:16,157 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534916157"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:16,161 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 19:15:16,162 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => b956b616a12c71076477d953e21a2fc0, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534913341.b956b616a12c71076477d953e21a2fc0.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => ff104b1b58bb9182d3d487d85ba46227, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534913341.ff104b1b58bb9182d3d487d85ba46227.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 69eb2862d0998ef2882d91a5e4c8b894, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534913341.69eb2862d0998ef2882d91a5e4c8b894.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => ec932807612791da91041b18e42327ac, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534913341.ec932807612791da91041b18e42327ac.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 5e81655a4881963074c7cd34fd9ea9c1, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534913341.5e81655a4881963074c7cd34fd9ea9c1.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 19:15:16,162 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
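The Delete entries above remove the five old region rows from hbase:meta; their start and end keys are listed in the "Deleted regions" entry. Once the truncate finishes and the table is re-enabled, the recreated boundaries could be inspected from a client along these lines (a sketch for reference, not part of the test):

    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListRegionsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          List<RegionInfo> regions =
              admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
          for (RegionInfo region : regions) {
            // Prints keys in the same escaped form used by the log, e.g. i\xBF\x14i\xBE.
            System.out.println(region.getEncodedName() + " ["
                + Bytes.toStringBinary(region.getStartKey()) + ", "
                + Bytes.toStringBinary(region.getEndKey()) + ")");
          }
        }
      }
    }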
2023-07-16 19:15:16,162 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534916162"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:16,165 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 19:15:16,177 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:16,177 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:16,177 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:16,177 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:16,177 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:16,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 19:15:16,179 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 empty. 2023-07-16 19:15:16,179 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 empty. 2023-07-16 19:15:16,179 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 empty. 2023-07-16 19:15:16,180 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:16,180 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:16,184 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 empty. 
2023-07-16 19:15:16,184 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 empty. 2023-07-16 19:15:16,187 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:16,187 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:16,188 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:16,188 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 19:15:16,256 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:16,263 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => f2b191e539bc2e49835a12b3a3971233, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:16,267 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 88881b9d0fe3706f95c56dbed633d9f0, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:16,275 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 298615affcf14695a008e88d344e9ef1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => 
'1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:16,368 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:16,368 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing f2b191e539bc2e49835a12b3a3971233, disabling compactions & flushes 2023-07-16 19:15:16,368 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:16,368 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:16,368 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. after waiting 0 ms 2023-07-16 19:15:16,368 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:16,368 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 
2023-07-16 19:15:16,368 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for f2b191e539bc2e49835a12b3a3971233: 2023-07-16 19:15:16,369 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 69b362909b30579f5bbb12defaa1d535, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:16,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 19:15:16,390 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:16,390 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 298615affcf14695a008e88d344e9ef1, disabling compactions & flushes 2023-07-16 19:15:16,391 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:16,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:16,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. after waiting 0 ms 2023-07-16 19:15:16,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:16,391 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 
2023-07-16 19:15:16,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 298615affcf14695a008e88d344e9ef1: 2023-07-16 19:15:16,391 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 227430229be8501ebec7d29feb7cf229, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:16,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:16,391 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 88881b9d0fe3706f95c56dbed633d9f0, disabling compactions & flushes 2023-07-16 19:15:16,392 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:16,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:16,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. after waiting 0 ms 2023-07-16 19:15:16,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:16,392 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 
2023-07-16 19:15:16,392 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 88881b9d0fe3706f95c56dbed633d9f0: 2023-07-16 19:15:16,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:16,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 227430229be8501ebec7d29feb7cf229, disabling compactions & flushes 2023-07-16 19:15:16,428 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:16,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:16,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. after waiting 0 ms 2023-07-16 19:15:16,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:16,428 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:16,428 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 227430229be8501ebec7d29feb7cf229: 2023-07-16 19:15:16,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 19:15:16,814 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:16,814 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 69b362909b30579f5bbb12defaa1d535, disabling compactions & flushes 2023-07-16 19:15:16,814 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:16,814 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:16,814 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 
after waiting 0 ms 2023-07-16 19:15:16,815 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:16,815 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:16,815 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 69b362909b30579f5bbb12defaa1d535: 2023-07-16 19:15:16,820 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534916819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534916819"}]},"ts":"1689534916819"} 2023-07-16 19:15:16,820 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534916819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534916819"}]},"ts":"1689534916819"} 2023-07-16 19:15:16,820 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534916819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534916819"}]},"ts":"1689534916819"} 2023-07-16 19:15:16,820 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534916819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534916819"}]},"ts":"1689534916819"} 2023-07-16 19:15:16,820 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534916819"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534916819"}]},"ts":"1689534916819"} 2023-07-16 19:15:16,826 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-16 19:15:16,828 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534916828"}]},"ts":"1689534916828"} 2023-07-16 19:15:16,830 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 19:15:16,835 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:16,835 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:16,835 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:16,835 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:16,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, ASSIGN}] 2023-07-16 19:15:16,837 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, ASSIGN 2023-07-16 19:15:16,839 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, ASSIGN 2023-07-16 19:15:16,840 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:16,841 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, ASSIGN 2023-07-16 19:15:16,842 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:16,843 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, ASSIGN 2023-07-16 19:15:16,844 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, ASSIGN 2023-07-16 19:15:16,845 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:16,846 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:16,846 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:16,990 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 19:15:16,993 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=69b362909b30579f5bbb12defaa1d535, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:16,993 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=298615affcf14695a008e88d344e9ef1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:16,993 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=88881b9d0fe3706f95c56dbed633d9f0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:16,994 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534916993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534916993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534916993"}]},"ts":"1689534916993"} 2023-07-16 19:15:16,993 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=227430229be8501ebec7d29feb7cf229, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:16,993 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=f2b191e539bc2e49835a12b3a3971233, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:16,994 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534916993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534916993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534916993"}]},"ts":"1689534916993"} 2023-07-16 19:15:16,994 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534916993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534916993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534916993"}]},"ts":"1689534916993"} 2023-07-16 19:15:16,994 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534916993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534916993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534916993"}]},"ts":"1689534916993"} 2023-07-16 19:15:16,994 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534916993"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534916993"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534916993"}]},"ts":"1689534916993"} 2023-07-16 19:15:16,996 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=62, state=RUNNABLE; OpenRegionProcedure 
69b362909b30579f5bbb12defaa1d535, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:16,997 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=63, state=RUNNABLE; OpenRegionProcedure 227430229be8501ebec7d29feb7cf229, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:16,998 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=60, state=RUNNABLE; OpenRegionProcedure 88881b9d0fe3706f95c56dbed633d9f0, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:17,002 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=61, state=RUNNABLE; OpenRegionProcedure 298615affcf14695a008e88d344e9ef1, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:17,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=59, state=RUNNABLE; OpenRegionProcedure f2b191e539bc2e49835a12b3a3971233, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:17,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:17,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 88881b9d0fe3706f95c56dbed633d9f0, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 19:15:17,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:17,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,155 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 
2023-07-16 19:15:17,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 298615affcf14695a008e88d344e9ef1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 19:15:17,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:17,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,156 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,156 INFO [StoreOpener-88881b9d0fe3706f95c56dbed633d9f0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,157 INFO [StoreOpener-298615affcf14695a008e88d344e9ef1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,158 DEBUG [StoreOpener-88881b9d0fe3706f95c56dbed633d9f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/f 2023-07-16 19:15:17,158 DEBUG [StoreOpener-88881b9d0fe3706f95c56dbed633d9f0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/f 2023-07-16 19:15:17,159 INFO [StoreOpener-88881b9d0fe3706f95c56dbed633d9f0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 88881b9d0fe3706f95c56dbed633d9f0 columnFamilyName f 2023-07-16 19:15:17,159 DEBUG [StoreOpener-298615affcf14695a008e88d344e9ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/f 2023-07-16 19:15:17,159 DEBUG [StoreOpener-298615affcf14695a008e88d344e9ef1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/f 2023-07-16 19:15:17,160 INFO [StoreOpener-298615affcf14695a008e88d344e9ef1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 298615affcf14695a008e88d344e9ef1 columnFamilyName f 2023-07-16 19:15:17,160 INFO [StoreOpener-88881b9d0fe3706f95c56dbed633d9f0-1] regionserver.HStore(310): Store=88881b9d0fe3706f95c56dbed633d9f0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:17,161 INFO [StoreOpener-298615affcf14695a008e88d344e9ef1-1] regionserver.HStore(310): Store=298615affcf14695a008e88d344e9ef1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:17,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:17,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:17,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:17,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 88881b9d0fe3706f95c56dbed633d9f0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11626212320, jitterRate=0.08277539908885956}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:17,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 88881b9d0fe3706f95c56dbed633d9f0: 2023-07-16 19:15:17,174 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0., pid=66, masterSystemTime=1689534917148 2023-07-16 19:15:17,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:17,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:17,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 298615affcf14695a008e88d344e9ef1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9514052320, jitterRate=-0.11393482983112335}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:17,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:17,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 
2023-07-16 19:15:17,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 298615affcf14695a008e88d344e9ef1: 2023-07-16 19:15:17,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 69b362909b30579f5bbb12defaa1d535, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 19:15:17,177 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=88881b9d0fe3706f95c56dbed633d9f0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:17,178 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534917177"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534917177"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534917177"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534917177"}]},"ts":"1689534917177"} 2023-07-16 19:15:17,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1., pid=67, masterSystemTime=1689534917150 2023-07-16 19:15:17,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:17,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:17,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:17,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 
2023-07-16 19:15:17,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 227430229be8501ebec7d29feb7cf229, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 19:15:17,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:17,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,181 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=298615affcf14695a008e88d344e9ef1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:17,181 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534917181"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534917181"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534917181"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534917181"}]},"ts":"1689534917181"} 2023-07-16 19:15:17,183 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=60 2023-07-16 19:15:17,183 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=60, state=SUCCESS; OpenRegionProcedure 88881b9d0fe3706f95c56dbed633d9f0, server=jenkins-hbase4.apache.org,37881,1689534906681 in 182 msec 2023-07-16 19:15:17,185 INFO [StoreOpener-69b362909b30579f5bbb12defaa1d535-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,185 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, ASSIGN in 347 msec 2023-07-16 19:15:17,185 INFO [StoreOpener-227430229be8501ebec7d29feb7cf229-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,186 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=61 2023-07-16 19:15:17,186 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): 
Finished pid=67, ppid=61, state=SUCCESS; OpenRegionProcedure 298615affcf14695a008e88d344e9ef1, server=jenkins-hbase4.apache.org,35369,1689534910605 in 183 msec 2023-07-16 19:15:17,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 19:15:17,187 DEBUG [StoreOpener-69b362909b30579f5bbb12defaa1d535-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/f 2023-07-16 19:15:17,187 DEBUG [StoreOpener-227430229be8501ebec7d29feb7cf229-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/f 2023-07-16 19:15:17,187 DEBUG [StoreOpener-227430229be8501ebec7d29feb7cf229-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/f 2023-07-16 19:15:17,187 DEBUG [StoreOpener-69b362909b30579f5bbb12defaa1d535-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/f 2023-07-16 19:15:17,188 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, ASSIGN in 350 msec 2023-07-16 19:15:17,188 INFO [StoreOpener-227430229be8501ebec7d29feb7cf229-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 227430229be8501ebec7d29feb7cf229 columnFamilyName f 2023-07-16 19:15:17,188 INFO [StoreOpener-69b362909b30579f5bbb12defaa1d535-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 69b362909b30579f5bbb12defaa1d535 columnFamilyName f 2023-07-16 19:15:17,188 INFO [StoreOpener-227430229be8501ebec7d29feb7cf229-1] regionserver.HStore(310): Store=227430229be8501ebec7d29feb7cf229/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:17,189 INFO [StoreOpener-69b362909b30579f5bbb12defaa1d535-1] regionserver.HStore(310): Store=69b362909b30579f5bbb12defaa1d535/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:17,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:17,194 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:17,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:17,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:17,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 69b362909b30579f5bbb12defaa1d535; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11979218720, jitterRate=0.1156516820192337}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:17,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 227430229be8501ebec7d29feb7cf229; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10688155360, jitterRate=-0.004587963223457336}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:17,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 69b362909b30579f5bbb12defaa1d535: 
2023-07-16 19:15:17,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 227430229be8501ebec7d29feb7cf229: 2023-07-16 19:15:17,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535., pid=64, masterSystemTime=1689534917148 2023-07-16 19:15:17,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229., pid=65, masterSystemTime=1689534917150 2023-07-16 19:15:17,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:17,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:17,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:17,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f2b191e539bc2e49835a12b3a3971233, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 19:15:17,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:17,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,201 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=69b362909b30579f5bbb12defaa1d535, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:17,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:17,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 
2023-07-16 19:15:17,202 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534917201"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534917201"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534917201"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534917201"}]},"ts":"1689534917201"} 2023-07-16 19:15:17,202 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=227430229be8501ebec7d29feb7cf229, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:17,203 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534917202"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534917202"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534917202"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534917202"}]},"ts":"1689534917202"} 2023-07-16 19:15:17,203 INFO [StoreOpener-f2b191e539bc2e49835a12b3a3971233-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,205 DEBUG [StoreOpener-f2b191e539bc2e49835a12b3a3971233-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/f 2023-07-16 19:15:17,205 DEBUG [StoreOpener-f2b191e539bc2e49835a12b3a3971233-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/f 2023-07-16 19:15:17,206 INFO [StoreOpener-f2b191e539bc2e49835a12b3a3971233-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f2b191e539bc2e49835a12b3a3971233 columnFamilyName f 2023-07-16 19:15:17,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=62 2023-07-16 19:15:17,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=62, state=SUCCESS; OpenRegionProcedure 69b362909b30579f5bbb12defaa1d535, server=jenkins-hbase4.apache.org,37881,1689534906681 in 208 msec 2023-07-16 19:15:17,206 INFO [StoreOpener-f2b191e539bc2e49835a12b3a3971233-1] regionserver.HStore(310): 
Store=f2b191e539bc2e49835a12b3a3971233/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:17,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=63 2023-07-16 19:15:17,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, ASSIGN in 370 msec 2023-07-16 19:15:17,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; OpenRegionProcedure 227430229be8501ebec7d29feb7cf229, server=jenkins-hbase4.apache.org,35369,1689534910605 in 208 msec 2023-07-16 19:15:17,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:17,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:17,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, ASSIGN in 372 msec 2023-07-16 19:15:17,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f2b191e539bc2e49835a12b3a3971233; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9784944160, jitterRate=-0.08870606124401093}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:17,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f2b191e539bc2e49835a12b3a3971233: 2023-07-16 19:15:17,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233., pid=68, masterSystemTime=1689534917148 2023-07-16 19:15:17,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:17,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 
2023-07-16 19:15:17,218 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=f2b191e539bc2e49835a12b3a3971233, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:17,218 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534917218"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534917218"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534917218"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534917218"}]},"ts":"1689534917218"} 2023-07-16 19:15:17,223 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=59 2023-07-16 19:15:17,223 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=59, state=SUCCESS; OpenRegionProcedure f2b191e539bc2e49835a12b3a3971233, server=jenkins-hbase4.apache.org,37881,1689534906681 in 217 msec 2023-07-16 19:15:17,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=58 2023-07-16 19:15:17,228 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, ASSIGN in 387 msec 2023-07-16 19:15:17,228 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534917228"}]},"ts":"1689534917228"} 2023-07-16 19:15:17,230 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 19:15:17,236 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-16 19:15:17,239 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.1700 sec 2023-07-16 19:15:18,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 19:15:18,189 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-16 19:15:18,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,190 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:18,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:18,193 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,193 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-16 19:15:18,198 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534918198"}]},"ts":"1689534918198"} 2023-07-16 19:15:18,199 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 19:15:18,201 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 19:15:18,202 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, UNASSIGN}] 2023-07-16 19:15:18,212 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, UNASSIGN 2023-07-16 19:15:18,212 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, UNASSIGN 2023-07-16 19:15:18,213 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, UNASSIGN 2023-07-16 19:15:18,213 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, UNASSIGN 2023-07-16 
19:15:18,213 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, UNASSIGN 2023-07-16 19:15:18,214 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=88881b9d0fe3706f95c56dbed633d9f0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:18,214 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534918214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534918214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534918214"}]},"ts":"1689534918214"} 2023-07-16 19:15:18,215 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=298615affcf14695a008e88d344e9ef1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:18,215 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=69b362909b30579f5bbb12defaa1d535, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:18,215 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=227430229be8501ebec7d29feb7cf229, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:18,215 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534918214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534918214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534918214"}]},"ts":"1689534918214"} 2023-07-16 19:15:18,215 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534918214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534918214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534918214"}]},"ts":"1689534918214"} 2023-07-16 19:15:18,215 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534918214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534918214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534918214"}]},"ts":"1689534918214"} 2023-07-16 19:15:18,215 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=f2b191e539bc2e49835a12b3a3971233, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:18,216 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534918215"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534918215"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534918215"}]},"ts":"1689534918215"} 2023-07-16 19:15:18,216 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=71, state=RUNNABLE; CloseRegionProcedure 88881b9d0fe3706f95c56dbed633d9f0, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:18,220 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 69b362909b30579f5bbb12defaa1d535, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:18,221 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=74, state=RUNNABLE; CloseRegionProcedure 227430229be8501ebec7d29feb7cf229, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:18,222 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=72, state=RUNNABLE; CloseRegionProcedure 298615affcf14695a008e88d344e9ef1, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:18,224 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=70, state=RUNNABLE; CloseRegionProcedure f2b191e539bc2e49835a12b3a3971233, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:18,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-16 19:15:18,370 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:18,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f2b191e539bc2e49835a12b3a3971233, disabling compactions & flushes 2023-07-16 19:15:18,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:18,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:18,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. after waiting 0 ms 2023-07-16 19:15:18,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:18,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:18,374 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 227430229be8501ebec7d29feb7cf229, disabling compactions & flushes 2023-07-16 19:15:18,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 
2023-07-16 19:15:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. after waiting 0 ms 2023-07-16 19:15:18,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:18,378 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:18,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233. 2023-07-16 19:15:18,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f2b191e539bc2e49835a12b3a3971233: 2023-07-16 19:15:18,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:18,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229. 2023-07-16 19:15:18,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 227430229be8501ebec7d29feb7cf229: 2023-07-16 19:15:18,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:18,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:18,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 88881b9d0fe3706f95c56dbed633d9f0, disabling compactions & flushes 2023-07-16 19:15:18,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:18,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 2023-07-16 19:15:18,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. after waiting 0 ms 2023-07-16 19:15:18,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 
2023-07-16 19:15:18,382 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=f2b191e539bc2e49835a12b3a3971233, regionState=CLOSED 2023-07-16 19:15:18,382 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534918382"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534918382"}]},"ts":"1689534918382"} 2023-07-16 19:15:18,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:18,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:18,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 298615affcf14695a008e88d344e9ef1, disabling compactions & flushes 2023-07-16 19:15:18,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:18,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:18,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. after waiting 0 ms 2023-07-16 19:15:18,384 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:18,384 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=227430229be8501ebec7d29feb7cf229, regionState=CLOSED 2023-07-16 19:15:18,384 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689534918384"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534918384"}]},"ts":"1689534918384"} 2023-07-16 19:15:18,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:18,389 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=70 2023-07-16 19:15:18,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=70, state=SUCCESS; CloseRegionProcedure f2b191e539bc2e49835a12b3a3971233, server=jenkins-hbase4.apache.org,37881,1689534906681 in 161 msec 2023-07-16 19:15:18,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0. 
2023-07-16 19:15:18,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 88881b9d0fe3706f95c56dbed633d9f0: 2023-07-16 19:15:18,391 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=74 2023-07-16 19:15:18,391 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=74, state=SUCCESS; CloseRegionProcedure 227430229be8501ebec7d29feb7cf229, server=jenkins-hbase4.apache.org,35369,1689534910605 in 166 msec 2023-07-16 19:15:18,392 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2b191e539bc2e49835a12b3a3971233, UNASSIGN in 188 msec 2023-07-16 19:15:18,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:18,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:18,393 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=227430229be8501ebec7d29feb7cf229, UNASSIGN in 189 msec 2023-07-16 19:15:18,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:18,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 69b362909b30579f5bbb12defaa1d535, disabling compactions & flushes 2023-07-16 19:15:18,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:18,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:18,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. after waiting 0 ms 2023-07-16 19:15:18,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 2023-07-16 19:15:18,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:18,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535. 
2023-07-16 19:15:18,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 69b362909b30579f5bbb12defaa1d535: 2023-07-16 19:15:18,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1. 2023-07-16 19:15:18,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 298615affcf14695a008e88d344e9ef1: 2023-07-16 19:15:18,402 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=88881b9d0fe3706f95c56dbed633d9f0, regionState=CLOSED 2023-07-16 19:15:18,402 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534918402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534918402"}]},"ts":"1689534918402"} 2023-07-16 19:15:18,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:18,407 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=69b362909b30579f5bbb12defaa1d535, regionState=CLOSED 2023-07-16 19:15:18,407 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534918407"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534918407"}]},"ts":"1689534918407"} 2023-07-16 19:15:18,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:18,408 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=298615affcf14695a008e88d344e9ef1, regionState=CLOSED 2023-07-16 19:15:18,408 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689534918408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534918408"}]},"ts":"1689534918408"} 2023-07-16 19:15:18,409 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=71 2023-07-16 19:15:18,409 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=71, state=SUCCESS; CloseRegionProcedure 88881b9d0fe3706f95c56dbed633d9f0, server=jenkins-hbase4.apache.org,37881,1689534906681 in 188 msec 2023-07-16 19:15:18,411 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=88881b9d0fe3706f95c56dbed633d9f0, UNASSIGN in 207 msec 2023-07-16 19:15:18,412 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-16 19:15:18,412 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure 69b362909b30579f5bbb12defaa1d535, server=jenkins-hbase4.apache.org,37881,1689534906681 in 189 msec 2023-07-16 19:15:18,413 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=72 2023-07-16 19:15:18,413 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=72, state=SUCCESS; CloseRegionProcedure 298615affcf14695a008e88d344e9ef1, server=jenkins-hbase4.apache.org,35369,1689534910605 in 188 msec 2023-07-16 19:15:18,413 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=69b362909b30579f5bbb12defaa1d535, UNASSIGN in 210 msec 2023-07-16 19:15:18,415 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-16 19:15:18,415 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=298615affcf14695a008e88d344e9ef1, UNASSIGN in 211 msec 2023-07-16 19:15:18,415 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534918415"}]},"ts":"1689534918415"} 2023-07-16 19:15:18,417 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 19:15:18,419 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 19:15:18,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 227 msec 2023-07-16 19:15:18,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-16 19:15:18,501 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-16 19:15:18,507 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,522 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_716561459' 2023-07-16 19:15:18,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,530 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:18,530 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-16 19:15:18,550 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:18,551 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:18,551 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:18,551 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:18,551 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:18,555 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/recovered.edits] 2023-07-16 19:15:18,556 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/recovered.edits] 2023-07-16 19:15:18,557 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/recovered.edits] 2023-07-16 19:15:18,558 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/f, FileablePath, 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/recovered.edits] 2023-07-16 19:15:18,558 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/recovered.edits] 2023-07-16 19:15:18,576 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1/recovered.edits/4.seqid 2023-07-16 19:15:18,577 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/298615affcf14695a008e88d344e9ef1 2023-07-16 19:15:18,578 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0/recovered.edits/4.seqid 2023-07-16 19:15:18,578 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233/recovered.edits/4.seqid 2023-07-16 19:15:18,580 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229/recovered.edits/4.seqid 2023-07-16 19:15:18,580 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2b191e539bc2e49835a12b3a3971233 2023-07-16 19:15:18,580 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/88881b9d0fe3706f95c56dbed633d9f0 2023-07-16 19:15:18,580 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535/recovered.edits/4.seqid 2023-07-16 19:15:18,581 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/227430229be8501ebec7d29feb7cf229 2023-07-16 19:15:18,582 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testTableMoveTruncateAndDrop/69b362909b30579f5bbb12defaa1d535 2023-07-16 19:15:18,582 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 19:15:18,586 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,594 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 19:15:18,597 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 19:15:18,599 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,599 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-16 19:15:18,599 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534918599"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:18,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534918599"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:18,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534918599"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:18,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534918599"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:18,600 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534918599"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:18,603 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 19:15:18,603 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f2b191e539bc2e49835a12b3a3971233, NAME => 'Group_testTableMoveTruncateAndDrop,,1689534916115.f2b191e539bc2e49835a12b3a3971233.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 88881b9d0fe3706f95c56dbed633d9f0, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689534916115.88881b9d0fe3706f95c56dbed633d9f0.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 298615affcf14695a008e88d344e9ef1, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689534916116.298615affcf14695a008e88d344e9ef1.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 69b362909b30579f5bbb12defaa1d535, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689534916116.69b362909b30579f5bbb12defaa1d535.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 227430229be8501ebec7d29feb7cf229, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689534916116.227430229be8501ebec7d29feb7cf229.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 19:15:18,603 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-16 19:15:18,603 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534918603"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:18,605 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 19:15:18,608 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 19:15:18,609 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 99 msec 2023-07-16 19:15:18,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-16 19:15:18,646 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-16 19:15:18,647 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,648 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:18,651 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37881] ipc.CallRunner(144): callId: 164 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:35330 deadline: 1689534978651, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46561 startCode=1689534906430. As of locationSeqNum=6. 2023-07-16 19:15:18,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:18,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:18,765 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:18,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup default 2023-07-16 19:15:18,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:18,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_716561459, current retry=0 2023-07-16 19:15:18,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_716561459 => default 2023-07-16 19:15:18,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:18,787 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_716561459 2023-07-16 19:15:18,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:18,795 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:18,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:18,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:18,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:18,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:18,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:18,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:18,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:18,805 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:18,810 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:18,811 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:18,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:18,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:18,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,825 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:18,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:18,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 149 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536118825, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:18,826 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:18,828 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:18,828 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,829 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,829 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:18,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:18,830 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:18,859 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=505 (was 417) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049-prefix:jenkins-hbase4.apache.org,46561,1689534906430.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100851766_17 at /127.0.0.1:38518 [Receiving block BP-1873196108-172.31.14.131-1689534900365:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100851766_17 at /127.0.0.1:35790 [Receiving block BP-1873196108-172.31.14.131-1689534900365:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp391494918-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1873196108-172.31.14.131-1689534900365:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50949@0x170d7385-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-78c1ca58-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp391494918-635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-9 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1873196108-172.31.14.131-1689534900365:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp391494918-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp391494918-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35369-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049-prefix:jenkins-hbase4.apache.org,35369,1689534910605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1873196108-172.31.14.131-1689534900365:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp391494918-634 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1511598895_17 at /127.0.0.1:35974 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp391494918-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100851766_17 at /127.0.0.1:34006 [Receiving block BP-1873196108-172.31.14.131-1689534900365:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp391494918-632 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-941949956_17 at /127.0.0.1:35826 [Receiving block BP-1873196108-172.31.14.131-1689534900365:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-941949956_17 at /127.0.0.1:38534 [Receiving block BP-1873196108-172.31.14.131-1689534900365:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1873196108-172.31.14.131-1689534900365:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:35369 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50949@0x170d7385-SendThread(127.0.0.1:50949) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1873196108-172.31.14.131-1689534900365:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:34211 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50949@0x170d7385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1873196108-172.31.14.131-1689534900365:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1100851766_17 at /127.0.0.1:38632 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35369 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:34211 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-941949956_17 at /127.0.0.1:34032 [Receiving block BP-1873196108-172.31.14.131-1689534900365:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-4-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_832793682_17 at /127.0.0.1:34220 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:35369Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp391494918-633-acceptor-0@407fbd16-ServerConnector@6626147f{HTTP/1.1, (http/1.1)}{0.0.0.0:34919} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=818 (was 672) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=381 (was 379) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=3030 (was 3524) 2023-07-16 19:15:18,860 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-16 19:15:18,878 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=505, OpenFileDescriptor=818, MaxFileDescriptor=60000, SystemLoadAverage=381, ProcessCount=172, AvailableMemoryMB=3029 2023-07-16 19:15:18,879 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-16 19:15:18,879 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-16 19:15:18,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:18,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
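The setup/teardown entries that follow repeat a fixed pattern: list the groups, move no tables and no servers back to default, drop and re-create a group named "master", and then try to move the master's own address (jenkins-hbase4.apache.org:38143) into it. That last call fails a few entries below with ConstraintException "Server ... is either offline or it does not exist." because the master is not a region server and so is not in the server set that RSGroupAdminServer.moveServers checks; TestRSGroupsBase logs it as "Got this on setup, FYI" and carries on. The sketch below only illustrates that kind of membership check; the class, field and method names are assumptions for illustration, not the HBase implementation.

import java.util.Set;

// Illustrative sketch of the server-side check behind the
// "Server ... is either offline or it does not exist." ConstraintException
// seen below. Names and structure are assumptions; the real logic lives in
// RSGroupAdminServer.moveServers().
final class MoveServersCheck {
    // Hypothetical view of the servers the group service knows about:
    // online region servers plus members already recorded in some group.
    private final Set<String> knownServers;

    MoveServersCheck(Set<String> knownServers) {
        this.knownServers = knownServers;
    }

    void checkServersExist(Set<String> requested) {
        for (String server : requested) {
            if (!knownServers.contains(server)) {
                // The real code throws org.apache.hadoop.hbase.constraint.ConstraintException.
                throw new IllegalStateException(
                    "Server " + server + " is either offline or it does not exist.");
            }
        }
    }
}

// Example: the master address is not a region server, so the check fails:
//   new MoveServersCheck(Set.of("jenkins-hbase4.apache.org:35369"))
//       .checkServersExist(Set.of("jenkins-hbase4.apache.org:38143"));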
2023-07-16 19:15:18,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:18,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:18,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:18,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:18,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:18,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:18,898 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:18,899 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:18,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:18,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:18,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:18,915 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:18,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 177 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536118915, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:18,916 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:18,918 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:18,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,920 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:18,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:18,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:18,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-16 19:15:18,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:18,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:41906 deadline: 1689536118922, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 19:15:18,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-16 19:15:18,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:18,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:41906 deadline: 1689536118923, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 19:15:18,924 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-16 19:15:18,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:18,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 187 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:41906 deadline: 1689536118924, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 19:15:18,925 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-16 19:15:18,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-16 19:15:18,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:18,932 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:18,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,935 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,941 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:18,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
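Immediately above, addRSGroup rejects the names foo*, foo@ and - with ConstraintException "RSGroup name should only contain alphanumeric characters" (thrown from RSGroupInfoManagerImpl.checkGroupName) and then accepts foo_123, so underscores are evidently allowed despite the wording of the message. A minimal sketch of that kind of validation follows; the regular expression and helper names are assumptions for illustration, not the actual checkGroupName source.

import java.util.regex.Pattern;

// Illustrative only: mirrors the behaviour visible in the log, where a name
// made of letters, digits and underscores (foo_123) is accepted and names
// containing '*', '@' or '-' are rejected.
final class GroupNameCheck {
    // Assumed pattern; the real checkGroupName() may differ in detail.
    private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

    static void validateGroupName(String name) {
        if (name == null || !VALID.matcher(name).matches()) {
            // The server wraps this in org.apache.hadoop.hbase.constraint.ConstraintException.
            throw new IllegalArgumentException(
                "RSGroup name should only contain alphanumeric characters: " + name);
        }
    }

    public static void main(String[] args) {
        for (String candidate : new String[] {"foo*", "foo@", "-", "foo_123"}) {
            try {
                validateGroupName(candidate);
                System.out.println(candidate + " -> accepted");
            } catch (IllegalArgumentException e) {
                System.out.println(candidate + " -> rejected: " + e.getMessage());
            }
        }
    }
}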
2023-07-16 19:15:18,942 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:18,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:18,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:18,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-16 19:15:18,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:18,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:18,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:18,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:18,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:18,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:18,952 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:18,953 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:18,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:18,967 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:18,970 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:18,971 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:18,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:18,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:18,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:18,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:18,981 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,981 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,983 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:18,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:18,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 221 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536118983, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:18,984 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:18,986 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:18,987 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:18,987 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:18,987 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:18,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:18,988 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:19,009 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=508 (was 505) Potentially hanging thread: hconnection-0x62be270e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=818 (was 818), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=391 (was 381) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=3027 (was 3029) 2023-07-16 19:15:19,009 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 19:15:19,029 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=818, MaxFileDescriptor=60000, SystemLoadAverage=391, ProcessCount=172, AvailableMemoryMB=3025 2023-07-16 19:15:19,029 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-16 19:15:19,029 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-16 19:15:19,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:19,036 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:19,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:19,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
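The ResourceChecker lines above snapshot thread count, open file descriptors, system load and free memory before and after each test and print the previous value in parentheses (e.g. Thread=508 (was 505)); when a counter grows it appends a "LEAK?" marker, and a thread count over 500 additionally produces the "is superior to 500" warning. A rough sketch of that before/after accounting is below; the names are invented for illustration and do not come from the ResourceChecker source.

// Rough sketch of the accounting behind lines such as
// "Thread=508 (was 505) - Thread LEAK? -". Names are invented; the real
// logic is in org.apache.hadoop.hbase.ResourceChecker.
final class ResourceDelta {
    private static final int THREAD_CEILING = 500; // matches the "superior to 500" warning

    static String report(String metric, long before, long after) {
        StringBuilder sb = new StringBuilder(metric + "=" + after + " (was " + before + ")");
        if (after > before) {
            sb.append(" - ").append(metric).append(" LEAK? -");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        long before = 505, after = 508;
        System.out.println(report("Thread", before, after));
        if (after > THREAD_CEILING) {
            System.out.println("WARN Thread=" + after + " is superior to " + THREAD_CEILING);
        }
    }
}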
2023-07-16 19:15:19,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:19,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:19,038 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:19,057 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:19,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:19,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:19,067 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:19,076 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:19,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:19,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:19,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:19,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:19,085 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:19,095 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:19,095 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:19,098 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:19,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:19,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 249 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536119098, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:19,099 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:19,101 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:19,101 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:19,102 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:19,102 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:19,103 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:19,103 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:19,104 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:19,104 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:19,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:19,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:19,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
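
The block above is the standard per-test reset from TestRSGroupsBase: tables and servers are moved back to the default group, leftover groups are removed, the special "master" group is re-created, and a moveServers of the master's own address (port 38143, the master RPC port rather than a live region server) is attempted and rejected with the ConstraintException logged as "Got this on setup, FYI". The group "bar" added at the end is the fixture for the testFailRemoveGroup run that follows. Below is a minimal sketch of that setup pattern, assuming the RSGroupAdminClient API from the hbase-rsgroup module; the class name, method name and the hard-coded host/port are illustrative, not taken from the test source.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class GroupSetupSketch {
      // Re-creates the 'master' and 'bar' groups and tries to pin the master's
      // own address into 'master', tolerating the rejection logged above.
      static void setUpGroups(Connection conn) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.addRSGroup("master");
        try {
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38143)),
              "master");
        } catch (IOException expected) {
          // The master port is not a live region server, so the master rejects the
          // move with the ConstraintException seen in the log; setup ignores it.
        }
        groups.addRSGroup("bar");
      }
    }
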
2023-07-16 19:15:19,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:19,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 19:15:19,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:19,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:19,114 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:19,117 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:19,118 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:19,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup bar 2023-07-16 19:15:19,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:19,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 19:15:19,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:19,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:19,127 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:19,127 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681, jenkins-hbase4.apache.org,42201,1689534906603] are moved back to default 2023-07-16 19:15:19,127 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-16 19:15:19,128 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:19,131 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:19,131 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:19,133 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 19:15:19,133 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:19,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:19,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:19,138 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:19,138 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-16 19:15:19,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 19:15:19,141 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:19,141 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 19:15:19,142 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:19,142 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:19,146 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:19,148 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,148 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 empty. 
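
Above, the client asks the master to create 'Group_testFailRemoveGroup' with a single column family 'f' and all-default attributes; the master stores CreateTableProcedure pid=81, which then walks through its states (PRE_OPERATION and WRITE_FS_LAYOUT here, ADD_TO_META and ASSIGN_REGIONS further down). A client-side equivalent of that request, sketched with the stock HBase 2.x Admin API; the admin handle is assumed to come from an open Connection.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    final class CreateTableSketch {
      static void createTestTable(Admin admin) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        // Synchronous from the caller's point of view: the client keeps polling the
        // master ("Checking to see if procedure is done pid=81" in the log) until
        // CreateTableProcedure finishes.
        admin.createTable(desc);
      }
    }
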
2023-07-16 19:15:19,149 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,149 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 19:15:19,177 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:19,178 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5406e9168638aca96291e22e27b0ddd1, NAME => 'Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:19,198 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:19,198 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 5406e9168638aca96291e22e27b0ddd1, disabling compactions & flushes 2023-07-16 19:15:19,198 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,198 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,198 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. after waiting 0 ms 2023-07-16 19:15:19,198 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,198 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 
2023-07-16 19:15:19,198 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:19,201 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:19,202 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534919202"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534919202"}]},"ts":"1689534919202"} 2023-07-16 19:15:19,204 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:19,205 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:19,205 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534919205"}]},"ts":"1689534919205"} 2023-07-16 19:15:19,207 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-16 19:15:19,215 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, ASSIGN}] 2023-07-16 19:15:19,217 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, ASSIGN 2023-07-16 19:15:19,218 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:19,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 19:15:19,370 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:19,370 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534919370"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534919370"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534919370"}]},"ts":"1689534919370"} 2023-07-16 19:15:19,372 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 
19:15:19,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 19:15:19,510 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 19:15:19,529 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5406e9168638aca96291e22e27b0ddd1, NAME => 'Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:19,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:19,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,531 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,534 DEBUG [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f 2023-07-16 19:15:19,534 DEBUG [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f 2023-07-16 19:15:19,535 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5406e9168638aca96291e22e27b0ddd1 columnFamilyName f 2023-07-16 19:15:19,536 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] regionserver.HStore(310): 
Store=5406e9168638aca96291e22e27b0ddd1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:19,539 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,540 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:19,549 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5406e9168638aca96291e22e27b0ddd1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10928901600, jitterRate=0.017833277583122253}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:19,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:19,550 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1., pid=83, masterSystemTime=1689534919524 2023-07-16 19:15:19,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,552 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 
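
At this point the new region 5406e9168638aca96291e22e27b0ddd1 has been opened on jenkins-hbase4.apache.org,46561,... and its location written to hbase:meta. A sketch of how a client can look that location up with the standard RegionLocator API; conn is an assumed open Connection.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class LocateRegionSketch {
      static ServerName locate(Connection conn) throws IOException {
        try (RegionLocator locator =
            conn.getRegionLocator(TableName.valueOf("Group_testFailRemoveGroup"))) {
          // reload=true skips the client-side cache and forces a fresh hbase:meta lookup
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          return loc.getServerName();
        }
      }
    }
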
2023-07-16 19:15:19,553 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:19,553 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534919553"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534919553"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534919553"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534919553"}]},"ts":"1689534919553"} 2023-07-16 19:15:19,558 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-16 19:15:19,558 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430 in 183 msec 2023-07-16 19:15:19,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-16 19:15:19,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, ASSIGN in 344 msec 2023-07-16 19:15:19,562 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:19,562 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534919562"}]},"ts":"1689534919562"} 2023-07-16 19:15:19,563 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-16 19:15:19,567 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:19,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 432 msec 2023-07-16 19:15:19,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 19:15:19,744 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-16 19:15:19,744 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-16 19:15:19,744 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:19,749 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
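
The "Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms" lines are the test utility polling hbase:meta and the assignment manager before the test body continues. A sketch of the same wait, assuming the HBaseTestingUtility helper these tests use; util stands in for the shared TEST_UTIL instance.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignmentSketch {
      static void waitForAssignment(HBaseTestingUtility util) throws IOException {
        // Blocks until every region of the table has a location in hbase:meta and
        // is open on a region server, or the 60 s timeout elapses.
        util.waitUntilAllRegionsAssigned(
            TableName.valueOf("Group_testFailRemoveGroup"), 60000);
      }
    }
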
2023-07-16 19:15:19,749 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:19,749 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-16 19:15:19,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-16 19:15:19,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:19,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 19:15:19,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:19,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:19,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-16 19:15:19,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 5406e9168638aca96291e22e27b0ddd1 to RSGroup bar 2023-07-16 19:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 19:15:19,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:19,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE 2023-07-16 19:15:19,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-16 19:15:19,760 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE 2023-07-16 19:15:19,761 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:19,761 DEBUG [PEWorker-3] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534919761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534919761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534919761"}]},"ts":"1689534919761"} 2023-07-16 19:15:19,763 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:19,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5406e9168638aca96291e22e27b0ddd1, disabling compactions & flushes 2023-07-16 19:15:19,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. after waiting 0 ms 2023-07-16 19:15:19,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:19,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:19,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 
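
The MoveTables request above makes the rsgroup endpoint reopen every region of the table on a server in the target group: pid=84 is a REOPEN/MOVE TransitRegionStateProcedure, and its CloseRegionProcedure child (pid=85) has just closed the region on ...,46561,.... A sketch of the client call that drives this, again assuming the RSGroupAdminClient API; groups is the client from the earlier sketch.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class MoveTableSketch {
      static void moveTableToBar(RSGroupAdminClient groups) throws IOException {
        // Returns only after the master has closed the region on its old server and
        // reopened it on a server in 'bar' (the ProcedureSyncWait on pid=84 further down).
        groups.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
      }
    }
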
2023-07-16 19:15:19,923 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:19,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5406e9168638aca96291e22e27b0ddd1 move to jenkins-hbase4.apache.org,42201,1689534906603 record at close sequenceid=2 2023-07-16 19:15:19,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:19,925 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=CLOSED 2023-07-16 19:15:19,925 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534919925"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534919925"}]},"ts":"1689534919925"} 2023-07-16 19:15:19,928 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-16 19:15:19,928 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430 in 165 msec 2023-07-16 19:15:19,929 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:20,079 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 19:15:20,080 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:20,080 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534920080"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534920080"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534920080"}]},"ts":"1689534920080"} 2023-07-16 19:15:20,083 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:20,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 
2023-07-16 19:15:20,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5406e9168638aca96291e22e27b0ddd1, NAME => 'Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:20,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:20,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,240 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,241 DEBUG [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f 2023-07-16 19:15:20,241 DEBUG [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f 2023-07-16 19:15:20,242 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5406e9168638aca96291e22e27b0ddd1 columnFamilyName f 2023-07-16 19:15:20,242 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] regionserver.HStore(310): Store=5406e9168638aca96291e22e27b0ddd1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:20,243 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,245 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5406e9168638aca96291e22e27b0ddd1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10471244640, jitterRate=-0.024789348244667053}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:20,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:20,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1., pid=86, masterSystemTime=1689534920234 2023-07-16 19:15:20,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:20,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:20,251 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:20,251 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534920251"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534920251"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534920251"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534920251"}]},"ts":"1689534920251"} 2023-07-16 19:15:20,254 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-16 19:15:20,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,42201,1689534906603 in 170 msec 2023-07-16 19:15:20,256 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE in 497 msec 2023-07-16 19:15:20,708 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-16 19:15:20,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-16 19:15:20,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
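
With the region reopened on ...,42201,... the move is complete ("All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar.") and the test reads the group back through ListRSGroupInfos/GetRSGroupInfo. A sketch of that verification, assuming RSGroupInfo's getServers()/getTables() accessors from the hbase-rsgroup module; the assertion style is illustrative.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class VerifyGroupSketch {
      static void verifyBarGroup(RSGroupAdminClient groups) throws IOException {
        RSGroupInfo bar = groups.getRSGroupInfo("bar");
        assertEquals(3, bar.getServers().size());   // 35369, 37881, 42201 per the log
        assertTrue(bar.getTables().contains(TableName.valueOf("Group_testFailRemoveGroup")));
      }
    }
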
2023-07-16 19:15:20,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:20,764 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:20,764 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:20,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 19:15:20,767 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:20,768 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 19:15:20,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:20,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:41906 deadline: 1689536120768, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-16 19:15:20,769 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup default 2023-07-16 19:15:20,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:20,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:41906 deadline: 1689536120769, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-16 19:15:20,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-16 19:15:20,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:20,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 19:15:20,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:20,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:20,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-16 19:15:20,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 5406e9168638aca96291e22e27b0ddd1 to RSGroup default 2023-07-16 19:15:20,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE 2023-07-16 19:15:20,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 19:15:20,779 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE 2023-07-16 19:15:20,780 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:20,780 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534920780"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534920780"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534920780"}]},"ts":"1689534920780"} 2023-07-16 19:15:20,784 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:20,938 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5406e9168638aca96291e22e27b0ddd1, disabling compactions & flushes 2023-07-16 19:15:20,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:20,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:20,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. after waiting 0 ms 2023-07-16 19:15:20,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:20,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:20,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 
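
The two ConstraintExceptions above are the behaviour this test (testFailRemoveGroup, per the table name) is exercising: "bar" cannot be removed while it still hosts a table, and its servers cannot be moved out while that table would be left without a host, so the table is first moved back to "default" (pid=87, another REOPEN/MOVE, whose close on ...,42201,... is logged above). A sketch of what that sequence might look like from the client side; this is not the test's actual code, and barServers stands for the three Address values moved into "bar" earlier.

    import static org.junit.Assert.fail;
    import java.io.IOException;
    import java.util.Collections;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class FailRemoveGroupSketch {
      static void exerciseFailedRemoval(RSGroupAdminClient groups, Set<Address> barServers)
          throws IOException {
        try {
          groups.removeRSGroup("bar");
          fail("remove must be rejected while the group still hosts a table");
        } catch (IOException e) {
          // server answers with ConstraintException: "RSGroup bar has 1 tables; ..."
        }
        try {
          groups.moveServers(barServers, "default");
          fail("servers may not leave a group whose tables would be left unhosted");
        } catch (IOException e) {
          // ConstraintException: "Cannot leave a RSGroup bar that contains tables ..."
        }
        // Only after the table goes back to 'default' does the group become removable.
        groups.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
      }
    }
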
2023-07-16 19:15:20,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:20,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5406e9168638aca96291e22e27b0ddd1 move to jenkins-hbase4.apache.org,46561,1689534906430 record at close sequenceid=5 2023-07-16 19:15:20,949 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:20,950 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=CLOSED 2023-07-16 19:15:20,950 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534920949"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534920949"}]},"ts":"1689534920949"} 2023-07-16 19:15:20,953 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-16 19:15:20,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,42201,1689534906603 in 170 msec 2023-07-16 19:15:20,954 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:21,105 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:21,105 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534921105"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534921105"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534921105"}]},"ts":"1689534921105"} 2023-07-16 19:15:21,107 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:21,263 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 
2023-07-16 19:15:21,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5406e9168638aca96291e22e27b0ddd1, NAME => 'Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:21,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:21,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,266 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,267 DEBUG [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f 2023-07-16 19:15:21,267 DEBUG [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f 2023-07-16 19:15:21,267 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5406e9168638aca96291e22e27b0ddd1 columnFamilyName f 2023-07-16 19:15:21,268 INFO [StoreOpener-5406e9168638aca96291e22e27b0ddd1-1] regionserver.HStore(310): Store=5406e9168638aca96291e22e27b0ddd1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:21,269 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,270 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,274 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5406e9168638aca96291e22e27b0ddd1; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10667336320, jitterRate=-0.0065268874168396}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:21,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:21,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1., pid=89, masterSystemTime=1689534921259 2023-07-16 19:15:21,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:21,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:21,277 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:21,277 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534921276"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534921276"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534921276"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534921276"}]},"ts":"1689534921276"} 2023-07-16 19:15:21,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-16 19:15:21,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430 in 171 msec 2023-07-16 19:15:21,281 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, REOPEN/MOVE in 503 msec 2023-07-16 19:15:21,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-16 19:15:21,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
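The entries above trace a MoveTables request end to end: the rsgroup endpoint updates the group znodes, then the master transits region 5406e9168638aca96291e22e27b0ddd1 (close on 42201, reopen on 46561) so the table lands on servers of the target group. As a rough client-side sketch of issuing such a request, assuming a reachable cluster configured through hbase-site.xml and using the RSGroupAdminClient class named in the stack traces (table and group names taken from the log), the call could look like this:

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    // Assumes hbase-site.xml is on the classpath and the rsgroup coprocessor is loaded on the master.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Mirrors "move tables [Group_testFailRemoveGroup] to rsgroup default" from the log;
      // the master then reassigns the table's regions onto servers of the target group.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
    }
  }
}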
2023-07-16 19:15:21,779 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:21,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:21,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:21,789 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 19:15:21,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:21,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:41906 deadline: 1689536121789, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed.
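The rejected removeRSGroup call above shows the ordering the endpoint enforces: a group can only be dropped once it holds neither tables nor servers, otherwise the master answers with a ConstraintException (here group bar still has 3 servers). A minimal sketch of draining a group before removing it, assuming an RSGroupAdminClient as in the traces (the helper name drainAndRemove is made up for illustration):

import java.util.Set;

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RemoveGroupSketch {
  // Tables must already have been moved out (see the MoveTables step above): moving servers
  // out of a group that still hosts tables is itself rejected with a ConstraintException.
  static void drainAndRemove(RSGroupAdminClient rsGroupAdmin, String group) throws Exception {
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    Set<Address> servers = info.getServers();
    if (!servers.isEmpty()) {
      // Send the remaining servers back to the default group first.
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
    }
    // The group is now empty, so removeRSGroup no longer throws.
    rsGroupAdmin.removeRSGroup(group);
  }
}

This is the same sequence the log shows next: servers 42201, 35369 and 37881 are moved back to default and the second remove rsgroup bar attempt succeeds.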
2023-07-16 19:15:21,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup default 2023-07-16 19:15:21,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:21,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 19:15:21,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:21,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:21,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-16 19:15:21,797 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681, jenkins-hbase4.apache.org,42201,1689534906603] are moved back to bar 2023-07-16 19:15:21,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-16 19:15:21,798 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:21,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:21,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:21,804 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 19:15:21,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:21,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:21,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:21,813 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:21,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:21,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:21,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:21,820 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:21,822 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-16 19:15:21,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-16 19:15:21,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:21,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 19:15:21,827 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534921827"}]},"ts":"1689534921827"} 2023-07-16 19:15:21,828 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-16 19:15:21,830 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-16 19:15:21,831 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, UNASSIGN}] 2023-07-16 19:15:21,833 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, UNASSIGN 2023-07-16 19:15:21,833 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:21,833 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534921833"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534921833"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534921833"}]},"ts":"1689534921833"} 2023-07-16 19:15:21,835 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:21,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 19:15:21,987 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,988 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5406e9168638aca96291e22e27b0ddd1, disabling compactions & flushes 2023-07-16 19:15:21,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:21,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:21,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. after waiting 0 ms 2023-07-16 19:15:21,989 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:21,993 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 19:15:21,994 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1. 2023-07-16 19:15:21,994 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5406e9168638aca96291e22e27b0ddd1: 2023-07-16 19:15:21,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:21,996 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=5406e9168638aca96291e22e27b0ddd1, regionState=CLOSED 2023-07-16 19:15:21,996 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689534921996"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534921996"}]},"ts":"1689534921996"} 2023-07-16 19:15:21,999 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-16 19:15:21,999 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 5406e9168638aca96291e22e27b0ddd1, server=jenkins-hbase4.apache.org,46561,1689534906430 in 162 msec 2023-07-16 19:15:22,001 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-16 19:15:22,001 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=5406e9168638aca96291e22e27b0ddd1, UNASSIGN in 168 msec 2023-07-16 19:15:22,002 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534922002"}]},"ts":"1689534922002"} 2023-07-16 19:15:22,003 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-16 19:15:22,005 INFO [PEWorker-1] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-16 19:15:22,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 183 msec 2023-07-16 19:15:22,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 19:15:22,129 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-16 19:15:22,130 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-16 19:15:22,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:22,133 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:22,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-16 19:15:22,134 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:22,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:22,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:22,138 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:22,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 19:15:22,146 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits] 2023-07-16 19:15:22,156 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits/10.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1/recovered.edits/10.seqid 2023-07-16 19:15:22,157 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testFailRemoveGroup/5406e9168638aca96291e22e27b0ddd1 2023-07-16 19:15:22,157 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 19:15:22,160 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:22,163 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-16 19:15:22,165 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-16 19:15:22,166 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:22,166 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-16 19:15:22,167 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534922166"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:22,168 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 19:15:22,169 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5406e9168638aca96291e22e27b0ddd1, NAME => 'Group_testFailRemoveGroup,,1689534919134.5406e9168638aca96291e22e27b0ddd1.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 19:15:22,169 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-16 19:15:22,169 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534922169"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:22,172 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-16 19:15:22,174 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 19:15:22,176 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 45 msec 2023-07-16 19:15:22,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 19:15:22,242 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-16 19:15:22,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:22,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
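Before the next test starts, the table itself is dropped: DisableTableProcedure (pid=90) unassigns the region and marks the table DISABLED, then DeleteTableProcedure (pid=93) archives the region directory under /archive and removes the META rows, as logged above. A hedged client-side equivalent using the standard Admin API (table name taken from the log):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();  // assumes hbase-site.xml on the classpath
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      if (admin.tableExists(table)) {
        if (admin.isTableEnabled(table)) {
          admin.disableTable(table);  // backed by a DisableTableProcedure on the master
        }
        admin.deleteTable(table);     // backed by a DeleteTableProcedure: HFiles archived, META rows removed
      }
    }
  }
}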
2023-07-16 19:15:22,247 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:22,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:22,248 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:22,249 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:22,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:22,260 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:22,264 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:22,265 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:22,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:22,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:22,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:22,298 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,302 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:22,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:22,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536122302, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:22,304 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:22,307 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:22,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,308 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:22,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:22,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:22,352 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=510 (was 507) Potentially hanging thread: hconnection-0x62be270e-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xfdeaa0f-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-941949956_17 at /127.0.0.1:35974 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-941949956_17 at /127.0.0.1:47048 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1146923140_17 at /127.0.0.1:34220 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0x62be270e-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=818 (was 818), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=391 (was 391), ProcessCount=172 (was 172), AvailableMemoryMB=3027 (was 3025) - AvailableMemoryMB LEAK? - 2023-07-16 19:15:22,352 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 19:15:22,374 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=510, OpenFileDescriptor=818, MaxFileDescriptor=60000, SystemLoadAverage=391, ProcessCount=172, AvailableMemoryMB=3027 2023-07-16 19:15:22,374 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 19:15:22,374 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-16 19:15:22,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,379 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:22,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
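The teardown above ends with TestRSGroupsBase polling until the group layout settles ("Waiting for cleanup to finish ..." with only the default and master groups left), and the same list/remove/add sequence is then replayed as setup for testMultiTableMove. A rough sketch of such a polling step, assuming the Waiter test utility and an RSGroupAdminClient as in the traces (the two-group expectation simply mirrors the [default, master] layout printed above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Waiter;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class WaitForCleanupSketch {
  // Polls for up to 60 seconds (the timeout shown in the log) until only the
  // "default" and "master" groups remain registered.
  static void waitForGroupCleanup(Configuration conf, RSGroupAdminClient rsGroupAdmin)
      throws Exception {
    Waiter.waitFor(conf, 60_000, (Waiter.Predicate<Exception>) () ->
        rsGroupAdmin.listRSGroups().size() == 2);
  }
}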
2023-07-16 19:15:22,380 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:22,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:22,381 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:22,382 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:22,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:22,388 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:22,392 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:22,393 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:22,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:22,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:22,399 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:22,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,406 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:22,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:22,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536122406, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:22,406 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:22,411 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:22,412 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,412 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,412 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:22,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:22,413 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:22,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:22,414 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:22,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,417 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:22,418 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:22,421 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:22,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,427 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369] to rsgroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,429 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:22,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:22,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:22,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605] are moved back to default 2023-07-16 19:15:22,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:22,435 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:22,435 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:22,438 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,438 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): 
User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:22,439 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:22,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:22,443 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:22,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-16 19:15:22,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 19:15:22,445 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:22,446 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:22,446 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:22,446 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:22,452 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:22,454 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,455 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 empty. 
2023-07-16 19:15:22,455 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,455 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 19:15:22,477 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:22,479 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => ba2b706fe6b102ad56c98351c5add960, NAME => 'GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:22,512 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:22,512 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing ba2b706fe6b102ad56c98351c5add960, disabling compactions & flushes 2023-07-16 19:15:22,512 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:22,512 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:22,512 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. after waiting 0 ms 2023-07-16 19:15:22,512 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:22,512 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 
2023-07-16 19:15:22,512 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for ba2b706fe6b102ad56c98351c5add960: 2023-07-16 19:15:22,519 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:22,520 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534922520"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534922520"}]},"ts":"1689534922520"} 2023-07-16 19:15:22,525 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:22,527 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:22,527 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534922527"}]},"ts":"1689534922527"} 2023-07-16 19:15:22,529 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-16 19:15:22,532 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:22,533 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:22,533 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:22,533 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:22,533 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:22,533 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, ASSIGN}] 2023-07-16 19:15:22,536 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, ASSIGN 2023-07-16 19:15:22,538 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:22,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 19:15:22,688 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 19:15:22,690 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:22,690 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534922690"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534922690"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534922690"}]},"ts":"1689534922690"} 2023-07-16 19:15:22,692 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:22,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 19:15:22,848 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:22,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba2b706fe6b102ad56c98351c5add960, NAME => 'GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:22,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:22,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,850 INFO [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,852 DEBUG [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/f 2023-07-16 19:15:22,852 DEBUG [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/f 2023-07-16 19:15:22,852 INFO [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba2b706fe6b102ad56c98351c5add960 columnFamilyName f 2023-07-16 19:15:22,854 INFO [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] regionserver.HStore(310): Store=ba2b706fe6b102ad56c98351c5add960/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:22,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:22,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:22,861 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba2b706fe6b102ad56c98351c5add960; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9810676960, jitterRate=-0.0863095074892044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:22,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba2b706fe6b102ad56c98351c5add960: 2023-07-16 19:15:22,862 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960., pid=96, masterSystemTime=1689534922844 2023-07-16 19:15:22,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:22,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 
2023-07-16 19:15:22,865 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:22,865 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534922865"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534922865"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534922865"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534922865"}]},"ts":"1689534922865"} 2023-07-16 19:15:22,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-16 19:15:22,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,42201,1689534906603 in 174 msec 2023-07-16 19:15:22,873 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-16 19:15:22,873 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, ASSIGN in 337 msec 2023-07-16 19:15:22,874 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:22,874 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534922874"}]},"ts":"1689534922874"} 2023-07-16 19:15:22,875 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-16 19:15:22,878 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:22,879 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 439 msec 2023-07-16 19:15:23,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 19:15:23,049 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-16 19:15:23,049 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-16 19:15:23,049 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:23,055 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-16 19:15:23,056 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:23,056 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-16 19:15:23,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:23,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:23,063 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:23,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-16 19:15:23,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 19:15:23,066 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:23,067 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,068 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:23,068 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:23,071 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:23,073 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,074 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 empty. 
2023-07-16 19:15:23,074 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,074 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 19:15:23,111 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:23,115 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3b4ca5b5effeb0bb7f100934d18e0af5, NAME => 'GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:23,152 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:23,152 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 3b4ca5b5effeb0bb7f100934d18e0af5, disabling compactions & flushes 2023-07-16 19:15:23,152 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,152 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,152 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. after waiting 0 ms 2023-07-16 19:15:23,152 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,152 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 
2023-07-16 19:15:23,152 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 3b4ca5b5effeb0bb7f100934d18e0af5: 2023-07-16 19:15:23,155 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:23,157 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923156"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534923156"}]},"ts":"1689534923156"} 2023-07-16 19:15:23,158 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:23,159 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:23,160 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534923160"}]},"ts":"1689534923160"} 2023-07-16 19:15:23,161 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-16 19:15:23,166 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:23,166 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:23,166 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:23,166 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:23,166 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:23,167 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, ASSIGN}] 2023-07-16 19:15:23,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 19:15:23,169 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, ASSIGN 2023-07-16 19:15:23,170 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37881,1689534906681; forceNewPlan=false, retain=false 2023-07-16 19:15:23,321 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 19:15:23,322 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:23,322 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923322"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534923322"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534923322"}]},"ts":"1689534923322"} 2023-07-16 19:15:23,324 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:23,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 19:15:23,486 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3b4ca5b5effeb0bb7f100934d18e0af5, NAME => 'GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:23,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:23,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,487 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,495 INFO [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,497 DEBUG [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/f 2023-07-16 19:15:23,497 DEBUG [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/f 2023-07-16 19:15:23,498 INFO [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3b4ca5b5effeb0bb7f100934d18e0af5 columnFamilyName f 2023-07-16 19:15:23,498 INFO [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] regionserver.HStore(310): Store=3b4ca5b5effeb0bb7f100934d18e0af5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:23,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:23,505 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3b4ca5b5effeb0bb7f100934d18e0af5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9622499200, jitterRate=-0.10383492708206177}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:23,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3b4ca5b5effeb0bb7f100934d18e0af5: 2023-07-16 19:15:23,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5., pid=99, masterSystemTime=1689534923481 2023-07-16 19:15:23,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 
2023-07-16 19:15:23,509 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:23,509 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923509"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534923509"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534923509"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534923509"}]},"ts":"1689534923509"} 2023-07-16 19:15:23,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-16 19:15:23,514 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,37881,1689534906681 in 188 msec 2023-07-16 19:15:23,516 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-16 19:15:23,516 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, ASSIGN in 348 msec 2023-07-16 19:15:23,517 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:23,517 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534923517"}]},"ts":"1689534923517"} 2023-07-16 19:15:23,519 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-16 19:15:23,521 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:23,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 464 msec 2023-07-16 19:15:23,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 19:15:23,670 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-16 19:15:23,670 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-16 19:15:23,670 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:23,674 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-16 19:15:23,674 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:23,674 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-16 19:15:23,675 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:23,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 19:15:23,687 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:23,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 19:15:23,688 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:23,688 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:23,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:23,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:23,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,697 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 3b4ca5b5effeb0bb7f100934d18e0af5 to RSGroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, REOPEN/MOVE 2023-07-16 19:15:23,698 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,699 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region ba2b706fe6b102ad56c98351c5add960 to RSGroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:23,699 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, REOPEN/MOVE 2023-07-16 19:15:23,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, REOPEN/MOVE 2023-07-16 19:15:23,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1632824276, current retry=0 2023-07-16 19:15:23,700 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:23,701 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, REOPEN/MOVE 2023-07-16 19:15:23,701 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923700"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534923700"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534923700"}]},"ts":"1689534923700"} 2023-07-16 19:15:23,701 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:23,701 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534923701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534923701"}]},"ts":"1689534923701"} 2023-07-16 19:15:23,702 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:23,703 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:23,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:23,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3b4ca5b5effeb0bb7f100934d18e0af5, disabling compactions & flushes 2023-07-16 19:15:23,859 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. after waiting 0 ms 2023-07-16 19:15:23,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba2b706fe6b102ad56c98351c5add960, disabling compactions & flushes 2023-07-16 19:15:23,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:23,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:23,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. after waiting 0 ms 2023-07-16 19:15:23,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:23,895 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:23,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:23,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3b4ca5b5effeb0bb7f100934d18e0af5: 2023-07-16 19:15:23,896 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3b4ca5b5effeb0bb7f100934d18e0af5 move to jenkins-hbase4.apache.org,35369,1689534910605 record at close sequenceid=2 2023-07-16 19:15:23,899 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:23,901 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:23,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 
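The MoveTables request and the region closes above are the server side of moving both tables into Group_testMultiTableMove_1632824276. A hypothetical sketch of the corresponding client call, assuming the hbase-rsgroup RSGroupAdminClient from this module and an open Connection named conn:

    import java.util.Arrays;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTablesSketch {
      // Issues the RSGroupAdminService.MoveTables RPC seen in the log; group name from the log.
      static void moveTables(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(
            new HashSet<>(Arrays.asList(
                TableName.valueOf("GrouptestMultiTableMoveA"),
                TableName.valueOf("GrouptestMultiTableMoveB"))),
            "Group_testMultiTableMove_1632824276");
        // The master then runs one REOPEN/MOVE TransitRegionStateProcedure per region,
        // as pids 100 and 101 show above.
      }
    }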
2023-07-16 19:15:23,902 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=CLOSED 2023-07-16 19:15:23,902 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923902"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534923902"}]},"ts":"1689534923902"} 2023-07-16 19:15:23,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba2b706fe6b102ad56c98351c5add960: 2023-07-16 19:15:23,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ba2b706fe6b102ad56c98351c5add960 move to jenkins-hbase4.apache.org,35369,1689534910605 record at close sequenceid=2 2023-07-16 19:15:23,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:23,906 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=CLOSED 2023-07-16 19:15:23,906 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534923906"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534923906"}]},"ts":"1689534923906"} 2023-07-16 19:15:23,912 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-16 19:15:23,912 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,37881,1689534906681 in 204 msec 2023-07-16 19:15:23,913 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-16 19:15:23,913 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:23,913 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,42201,1689534906603 in 206 msec 2023-07-16 19:15:23,917 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,35369,1689534910605; forceNewPlan=false, retain=false 2023-07-16 19:15:24,064 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:24,064 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:24,064 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534924064"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534924064"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534924064"}]},"ts":"1689534924064"} 2023-07-16 19:15:24,064 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534924064"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534924064"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534924064"}]},"ts":"1689534924064"} 2023-07-16 19:15:24,066 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:24,066 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:24,220 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:24,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3b4ca5b5effeb0bb7f100934d18e0af5, NAME => 'GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:24,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:24,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,222 INFO [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,223 DEBUG [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/f 2023-07-16 19:15:24,223 DEBUG [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/f 2023-07-16 19:15:24,224 INFO [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3b4ca5b5effeb0bb7f100934d18e0af5 columnFamilyName f 2023-07-16 19:15:24,224 INFO [StoreOpener-3b4ca5b5effeb0bb7f100934d18e0af5-1] regionserver.HStore(310): Store=3b4ca5b5effeb0bb7f100934d18e0af5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:24,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:24,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3b4ca5b5effeb0bb7f100934d18e0af5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11008536000, jitterRate=0.02524980902671814}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:24,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3b4ca5b5effeb0bb7f100934d18e0af5: 2023-07-16 19:15:24,468 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5., pid=104, masterSystemTime=1689534924217 2023-07-16 19:15:24,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:24,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:24,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 
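After a REOPEN/MOVE like the one above, a client can check where each region landed. A minimal sketch, assuming an open Connection named conn; the table name is taken from the log:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationSketch {
      // Lists every region of the table and the server currently hosting it.
      static void printLocations(Connection conn) throws Exception {
        TableName tn = TableName.valueOf("GrouptestMultiTableMoveB");
        try (RegionLocator locator = conn.getRegionLocator(tn)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }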
2023-07-16 19:15:24,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba2b706fe6b102ad56c98351c5add960, NAME => 'GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:24,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:24,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,472 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:24,473 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534924472"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534924472"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534924472"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534924472"}]},"ts":"1689534924472"} 2023-07-16 19:15:24,475 INFO [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-16 19:15:24,477 DEBUG [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/f 2023-07-16 19:15:24,477 DEBUG [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/f 2023-07-16 19:15:24,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,35369,1689534910605 in 408 msec 2023-07-16 19:15:24,481 INFO [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 
2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba2b706fe6b102ad56c98351c5add960 columnFamilyName f 2023-07-16 19:15:24,482 INFO [StoreOpener-ba2b706fe6b102ad56c98351c5add960-1] regionserver.HStore(310): Store=ba2b706fe6b102ad56c98351c5add960/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:24,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, REOPEN/MOVE in 780 msec 2023-07-16 19:15:24,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,486 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba2b706fe6b102ad56c98351c5add960; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10782705120, jitterRate=0.004217669367790222}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:24,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba2b706fe6b102ad56c98351c5add960: 2023-07-16 19:15:24,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960., pid=105, masterSystemTime=1689534924217 2023-07-16 19:15:24,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:24,495 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 
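Once both regions reopen on the target-group server, the test issues GetRSGroupInfoOfTable requests (visible a few lines below) to confirm group membership. A hypothetical sketch of that check, assuming RSGroupAdminClient and an open Connection named conn:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupOfTableSketch {
      // Returns the name of the rsgroup a table belongs to, or null if it has none.
      static String groupOf(Connection conn, String table) throws Exception {
        RSGroupInfo info = new RSGroupAdminClient(conn)
            .getRSGroupInfoOfTable(TableName.valueOf(table));
        // RSGroupInfo also exposes the group's member servers and tables.
        return info == null ? null : info.getName();
      }
    }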
2023-07-16 19:15:24,496 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:24,496 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534924496"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534924496"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534924496"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534924496"}]},"ts":"1689534924496"} 2023-07-16 19:15:24,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-16 19:15:24,500 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,35369,1689534910605 in 432 msec 2023-07-16 19:15:24,501 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, REOPEN/MOVE in 801 msec 2023-07-16 19:15:24,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-16 19:15:24,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1632824276. 2023-07-16 19:15:24,700 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:24,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:24,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:24,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 19:15:24,710 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:24,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 19:15:24,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:24,712 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:24,713 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:24,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1632824276 2023-07-16 19:15:24,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:24,716 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-16 19:15:24,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-16 19:15:24,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:24,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 19:15:24,721 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534924721"}]},"ts":"1689534924721"} 2023-07-16 19:15:24,722 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-16 19:15:24,723 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-16 19:15:24,727 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, UNASSIGN}] 2023-07-16 19:15:24,728 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, UNASSIGN 2023-07-16 19:15:24,729 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:24,729 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534924729"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534924729"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534924729"}]},"ts":"1689534924729"} 2023-07-16 19:15:24,730 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure ba2b706fe6b102ad56c98351c5add960, 
server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:24,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 19:15:24,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba2b706fe6b102ad56c98351c5add960, disabling compactions & flushes 2023-07-16 19:15:24,883 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:24,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:24,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. after waiting 0 ms 2023-07-16 19:15:24,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 2023-07-16 19:15:24,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:24,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960. 
2023-07-16 19:15:24,888 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba2b706fe6b102ad56c98351c5add960: 2023-07-16 19:15:24,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:24,890 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=ba2b706fe6b102ad56c98351c5add960, regionState=CLOSED 2023-07-16 19:15:24,890 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534924890"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534924890"}]},"ts":"1689534924890"} 2023-07-16 19:15:24,893 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-16 19:15:24,893 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure ba2b706fe6b102ad56c98351c5add960, server=jenkins-hbase4.apache.org,35369,1689534910605 in 161 msec 2023-07-16 19:15:24,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-16 19:15:24,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ba2b706fe6b102ad56c98351c5add960, UNASSIGN in 169 msec 2023-07-16 19:15:24,895 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534924895"}]},"ts":"1689534924895"} 2023-07-16 19:15:24,897 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-16 19:15:24,899 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-16 19:15:24,901 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 183 msec 2023-07-16 19:15:25,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 19:15:25,024 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-16 19:15:25,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-16 19:15:25,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:25,029 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:25,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1632824276' 2023-07-16 19:15:25,032 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:25,032 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 19:15:25,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,034 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:25,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,039 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:25,041 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/recovered.edits] 2023-07-16 19:15:25,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 19:15:25,048 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/recovered.edits/7.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960/recovered.edits/7.seqid 2023-07-16 19:15:25,049 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveA/ba2b706fe6b102ad56c98351c5add960 2023-07-16 19:15:25,049 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 19:15:25,052 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:25,055 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-16 19:15:25,056 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 
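The DisableTableProcedure and DeleteTableProcedure above (pids 106 and 109) correspond to the usual two-step client teardown of a table. A minimal sketch, assuming an open Connection named conn:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class DropTableSketch {
      // Disable first (unassigns the regions), then delete (archives region dirs
      // and removes the table's rows from hbase:meta, as logged above).
      static void drop(Connection conn, String table) throws Exception {
        TableName tn = TableName.valueOf(table);
        try (Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);
          }
          admin.deleteTable(tn);
        }
      }
    }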
2023-07-16 19:15:25,058 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:25,058 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-16 19:15:25,058 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534925058"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:25,060 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 19:15:25,060 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ba2b706fe6b102ad56c98351c5add960, NAME => 'GrouptestMultiTableMoveA,,1689534922439.ba2b706fe6b102ad56c98351c5add960.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 19:15:25,060 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-16 19:15:25,060 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534925060"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:25,062 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-16 19:15:25,064 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 19:15:25,066 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 39 msec 2023-07-16 19:15:25,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 19:15:25,145 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-16 19:15:25,146 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-16 19:15:25,147 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-16 19:15:25,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 19:15:25,159 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534925159"}]},"ts":"1689534925159"} 2023-07-16 19:15:25,161 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-16 19:15:25,163 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-16 19:15:25,164 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, UNASSIGN}] 2023-07-16 19:15:25,166 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, UNASSIGN 2023-07-16 19:15:25,167 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:25,167 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534925167"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534925167"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534925167"}]},"ts":"1689534925167"} 2023-07-16 19:15:25,171 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,35369,1689534910605}] 2023-07-16 19:15:25,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 19:15:25,323 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:25,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3b4ca5b5effeb0bb7f100934d18e0af5, disabling compactions & flushes 2023-07-16 19:15:25,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:25,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:25,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. after waiting 0 ms 2023-07-16 19:15:25,325 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 2023-07-16 19:15:25,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:25,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5. 
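The repeated "Checking to see if procedure is done pid=110" entries are the client polling the master for the disable procedure's completion. A hypothetical sketch of the asynchronous form of that call, assuming an open Connection named conn (the blocking disableTable likely goes through the same polling internally):

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class AsyncDisableSketch {
      // Submits the disable and then blocks until the master reports the procedure complete.
      static void disableAndWait(Connection conn, String table) throws Exception {
        try (Admin admin = conn.getAdmin()) {
          Future<Void> f = admin.disableTableAsync(TableName.valueOf(table));
          f.get();
        }
      }
    }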
2023-07-16 19:15:25,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3b4ca5b5effeb0bb7f100934d18e0af5: 2023-07-16 19:15:25,332 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:25,333 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=3b4ca5b5effeb0bb7f100934d18e0af5, regionState=CLOSED 2023-07-16 19:15:25,333 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689534925333"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534925333"}]},"ts":"1689534925333"} 2023-07-16 19:15:25,336 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-16 19:15:25,336 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 3b4ca5b5effeb0bb7f100934d18e0af5, server=jenkins-hbase4.apache.org,35369,1689534910605 in 163 msec 2023-07-16 19:15:25,338 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-16 19:15:25,338 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=3b4ca5b5effeb0bb7f100934d18e0af5, UNASSIGN in 172 msec 2023-07-16 19:15:25,339 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534925338"}]},"ts":"1689534925338"} 2023-07-16 19:15:25,340 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-16 19:15:25,342 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-16 19:15:25,344 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 196 msec 2023-07-16 19:15:25,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 19:15:25,461 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-16 19:15:25,462 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-16 19:15:25,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,468 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1632824276' 2023-07-16 19:15:25,469 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:25,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,473 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:25,475 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/recovered.edits] 2023-07-16 19:15:25,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 19:15:25,482 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/recovered.edits/7.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5/recovered.edits/7.seqid 2023-07-16 19:15:25,483 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/GrouptestMultiTableMoveB/3b4ca5b5effeb0bb7f100934d18e0af5 2023-07-16 19:15:25,483 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 19:15:25,486 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,488 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-16 19:15:25,490 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-16 19:15:25,491 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,491 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-16 19:15:25,491 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534925491"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:25,493 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 19:15:25,493 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 3b4ca5b5effeb0bb7f100934d18e0af5, NAME => 'GrouptestMultiTableMoveB,,1689534923057.3b4ca5b5effeb0bb7f100934d18e0af5.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 19:15:25,493 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-16 19:15:25,493 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534925493"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:25,494 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-16 19:15:25,496 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 19:15:25,498 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 33 msec 2023-07-16 19:15:25,580 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 19:15:25,580 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-16 19:15:25,583 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,583 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,584 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,584 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:25,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369] to rsgroup default 2023-07-16 19:15:25,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632824276 2023-07-16 19:15:25,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,591 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1632824276, current retry=0 2023-07-16 19:15:25,592 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605] are moved back to Group_testMultiTableMove_1632824276 2023-07-16 19:15:25,592 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1632824276 => default 2023-07-16 19:15:25,592 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,592 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1632824276 2023-07-16 19:15:25,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,597 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:25,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
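The MoveServers and RemoveRSGroup requests above are the teardown path: servers are moved back to the default group before the test group is dropped. A hypothetical sketch, assuming RSGroupAdminClient and an open Connection named conn; the host:port value is the one from the log and purely illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RemoveGroupSketch {
      // A group must be emptied of servers (and tables) before it can be removed.
      static void removeGroup(Connection conn, String group) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:35369")),
            RSGroupInfo.DEFAULT_GROUP);
        rsGroupAdmin.removeRSGroup(group);
      }
    }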
2023-07-16 19:15:25,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:25,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:25,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:25,606 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,609 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:25,609 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:25,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:25,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,620 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,620 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,622 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:25,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536125622, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:25,623 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:25,625 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:25,625 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,626 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,626 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:25,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,627 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,645 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=506 (was 510), OpenFileDescriptor=789 (was 818), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=463 (was 391) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=2857 (was 3027) 2023-07-16 19:15:25,645 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-16 19:15:25,661 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=506, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=463, ProcessCount=172, AvailableMemoryMB=2856 2023-07-16 19:15:25,661 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-16 19:15:25,661 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-16 19:15:25,665 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,666 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 19:15:25,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,667 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:25,667 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,668 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:25,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:25,673 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,676 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:25,677 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:25,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:25,681 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,684 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,684 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,686 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:25,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536125686, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:25,687 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:25,689 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:25,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,689 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,690 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:25,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,690 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,691 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,692 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-16 19:15:25,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,709 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,711 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup oldGroup 2023-07-16 19:15:25,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:25,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to default 2023-07-16 19:15:25,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-16 19:15:25,716 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 19:15:25,721 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 19:15:25,722 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,723 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,723 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,723 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-16 19:15:25,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 19:15:25,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:25,728 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,733 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201] to rsgroup anotherRSGroup 2023-07-16 19:15:25,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 19:15:25,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:25,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:25,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42201,1689534906603] are moved back to default 2023-07-16 19:15:25,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-16 19:15:25,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,740 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,740 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,743 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 19:15:25,743 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 19:15:25,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-16 19:15:25,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:41906 deadline: 1689536125749, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-16 19:15:25,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-16 19:15:25,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:41906 deadline: 1689536125752, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-16 19:15:25,753 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-16 19:15:25,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:41906 deadline: 1689536125753, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-16 19:15:25,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-16 19:15:25,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:41906 deadline: 1689536125754, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-16 19:15:25,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,759 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
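The ConstraintExceptions above are the checks testRenameRSGroupConstraints drives on purpose: renaming a group that does not exist, renaming onto an existing group name, and renaming the default group are all rejected in RSGroupInfoManagerImpl.renameRSGroup. A client-side sketch of those expected failures, assuming the RSGroupAdminClient used by these tests exposes renameRSGroup (as the endpoint stack frames indicate) and that the remote ConstraintException is unwrapped client-side, as the WARN traces earlier in this log show for moveServers:

    import java.io.IOException;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    final class RenameConstraints {
      // Each rename is expected to be rejected; the comments quote the exception
      // messages recorded in the log above.
      static void exercise(RSGroupAdminClient rsGroupAdmin) throws IOException {
        try {
          rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1");
        } catch (ConstraintException e) {
          // "RSGroup nonExistingRSGroup does not exist"
        }
        try {
          rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup");
        } catch (ConstraintException e) {
          // "Group already exists: anotherRSGroup"
        }
        try {
          rsGroupAdmin.renameRSGroup("default", "newRSGroup2");
        } catch (ConstraintException e) {
          // "Can't rename default rsgroup"
        }
      }
    }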
2023-07-16 19:15:25,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,761 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201] to rsgroup default 2023-07-16 19:15:25,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 19:15:25,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:25,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-16 19:15:25,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42201,1689534906603] are moved back to anotherRSGroup 2023-07-16 19:15:25,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-16 19:15:25,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,773 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-16 19:15:25,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 19:15:25,786 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,787 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-16 19:15:25,787 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup default 2023-07-16 19:15:25,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 19:15:25,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-16 19:15:25,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to oldGroup 2023-07-16 19:15:25,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-16 19:15:25,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,794 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-16 19:15:25,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:25,799 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:25,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:25,800 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:25,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:25,807 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,809 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:25,810 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:25,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:25,814 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,817 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,819 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:25,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536125819, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:25,819 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:25,821 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:25,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,822 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,822 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:25,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,823 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,840 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=510 (was 506) Potentially hanging thread: hconnection-0x62be270e-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=789 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=463 (was 463), ProcessCount=172 (was 172), AvailableMemoryMB=2852 (was 2856) 2023-07-16 19:15:25,840 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 19:15:25,856 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=510, OpenFileDescriptor=789, MaxFileDescriptor=60000, SystemLoadAverage=463, ProcessCount=172, AvailableMemoryMB=2852 2023-07-16 19:15:25,856 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 19:15:25,856 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-16 19:15:25,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,860 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,861 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:25,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
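The ConstraintException stack traces above (and repeated below) come from the harness trying to move the master's own RPC address (jenkins-hbase4.apache.org:38143) into the 'master' rs-group; the master is not an online region server, so moveServers() rejects the address and the test only logs "Got this on setup, FYI" and continues. A hedged sketch of that pattern, assuming a live connection; the class name and control flow are assumptions, only the API calls and the address come from this log.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterServerSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master"); // group name used by the fixture in this log
      // The master's RPC address (port 38143 above) is not an online region server,
      // so moveServers() is expected to fail with a ConstraintException.
      Address masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 38143);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
      } catch (ConstraintException expected) {
        // The test fixture just logs this ("Got this on setup, FYI") and carries on.
        System.out.println("Ignoring: " + expected.getMessage());
      }
    }
  }
}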
2023-07-16 19:15:25,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:25,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:25,863 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,864 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:25,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:25,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:25,872 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:25,873 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:25,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:25,882 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:25,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:25,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536125886, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:25,887 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:25,889 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:25,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,889 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,890 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:25,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,891 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,891 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:25,891 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,892 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-16 19:15:25,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:25,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,901 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:25,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,904 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,907 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup oldgroup 2023-07-16 19:15:25,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:25,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:25,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to default 2023-07-16 19:15:25,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-16 19:15:25,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:25,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:25,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:25,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 19:15:25,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:25,919 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:25,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-16 19:15:25,921 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:25,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-16 19:15:25,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 19:15:25,924 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:25,924 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:25,924 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:25,925 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:25,927 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:25,929 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:25,929 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 empty. 
2023-07-16 19:15:25,930 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:25,930 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-16 19:15:25,948 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:25,950 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => da6797ffc5b2850bae13a1d0baad6804, NAME => 'testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:25,964 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:25,964 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing da6797ffc5b2850bae13a1d0baad6804, disabling compactions & flushes 2023-07-16 19:15:25,964 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:25,964 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:25,964 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. after waiting 0 ms 2023-07-16 19:15:25,964 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:25,964 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:25,964 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:25,966 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:25,967 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534925967"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534925967"}]},"ts":"1689534925967"} 2023-07-16 19:15:25,968 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
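The CreateTableProcedure entries above describe a table 'testRename' with a single column family 'tr', REGION_REPLICATION => '1', and every other attribute at its default. A hedged client-side sketch that would produce an equivalent descriptor (illustrative only; the class name is an assumption, the builder calls are the standard HBase 2.x client API):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTestRenameTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // REGION_REPLICATION => '1' and a single family 'tr' with default attributes,
      // matching the descriptor printed by HMaster above.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
              .build());
    }
  }
}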
2023-07-16 19:15:25,969 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:25,969 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534925969"}]},"ts":"1689534925969"} 2023-07-16 19:15:25,971 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-16 19:15:25,975 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:25,976 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:25,976 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:25,976 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:25,976 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, ASSIGN}] 2023-07-16 19:15:25,979 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, ASSIGN 2023-07-16 19:15:25,980 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:26,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 19:15:26,130 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 19:15:26,131 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:26,131 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534926131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534926131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534926131"}]},"ts":"1689534926131"} 2023-07-16 19:15:26,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:26,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 19:15:26,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:26,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da6797ffc5b2850bae13a1d0baad6804, NAME => 'testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:26,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:26,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,291 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,293 DEBUG [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/tr 2023-07-16 19:15:26,293 DEBUG [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/tr 2023-07-16 19:15:26,294 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da6797ffc5b2850bae13a1d0baad6804 columnFamilyName tr 2023-07-16 19:15:26,294 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] regionserver.HStore(310): Store=da6797ffc5b2850bae13a1d0baad6804/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:26,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:26,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da6797ffc5b2850bae13a1d0baad6804; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10024746240, jitterRate=-0.06637275218963623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:26,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:26,304 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804., pid=116, masterSystemTime=1689534926284 2023-07-16 19:15:26,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:26,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 
2023-07-16 19:15:26,306 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:26,306 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534926306"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534926306"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534926306"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534926306"}]},"ts":"1689534926306"} 2023-07-16 19:15:26,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-16 19:15:26,309 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,42201,1689534906603 in 175 msec 2023-07-16 19:15:26,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-16 19:15:26,311 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, ASSIGN in 333 msec 2023-07-16 19:15:26,312 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:26,312 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534926312"}]},"ts":"1689534926312"} 2023-07-16 19:15:26,313 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-16 19:15:26,316 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:26,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 397 msec 2023-07-16 19:15:26,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 19:15:26,526 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-16 19:15:26,527 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-16 19:15:26,527 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:26,531 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-16 19:15:26,531 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:26,531 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
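The Waiter/HBaseTestingUtility entries above show the test blocking until every region of 'testRename' is assigned (60 s timeout); the entries that follow then move the table into the 'oldgroup' rs-group. A hedged sketch of those two client-side steps, assuming a started HBaseTestingUtility mini-cluster; the class and method names beyond those appearing in this log are assumptions.

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  // Assumes a started HBaseTestingUtility mini-cluster, as the rsgroup tests use.
  public static void assignThenMove(HBaseTestingUtility testUtil) throws Exception {
    TableName table = TableName.valueOf("testRename");
    // "Waiting until all regions of table testRename get assigned. Timeout = 60000ms"
    testUtil.waitUntilAllRegionsAssigned(table, 60_000);
    // Move the new table into the 'oldgroup' rs-group, as the MoveTables entries
    // that follow show on the master side.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(testUtil.getConnection());
    rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
  }
}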
2023-07-16 19:15:26,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-16 19:15:26,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:26,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:26,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:26,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:26,539 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-16 19:15:26,540 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region da6797ffc5b2850bae13a1d0baad6804 to RSGroup oldgroup 2023-07-16 19:15:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:26,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE 2023-07-16 19:15:26,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-16 19:15:26,541 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE 2023-07-16 19:15:26,542 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:26,542 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534926542"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534926542"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534926542"}]},"ts":"1689534926542"} 2023-07-16 19:15:26,543 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:26,697 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da6797ffc5b2850bae13a1d0baad6804, disabling compactions & flushes 2023-07-16 19:15:26,698 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:26,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:26,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. after waiting 0 ms 2023-07-16 19:15:26,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:26,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:26,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:26,703 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:26,703 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding da6797ffc5b2850bae13a1d0baad6804 move to jenkins-hbase4.apache.org,37881,1689534906681 record at close sequenceid=2 2023-07-16 19:15:26,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:26,705 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=CLOSED 2023-07-16 19:15:26,705 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534926705"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534926705"}]},"ts":"1689534926705"} 2023-07-16 19:15:26,707 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-16 19:15:26,707 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,42201,1689534906603 in 163 msec 2023-07-16 19:15:26,708 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37881,1689534906681; 
forceNewPlan=false, retain=false 2023-07-16 19:15:26,858 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 19:15:26,858 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:26,859 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534926858"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534926858"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534926858"}]},"ts":"1689534926858"} 2023-07-16 19:15:26,860 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:27,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:27,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da6797ffc5b2850bae13a1d0baad6804, NAME => 'testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:27,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:27,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,017 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,018 DEBUG [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/tr 2023-07-16 19:15:27,019 DEBUG [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/tr 2023-07-16 19:15:27,019 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da6797ffc5b2850bae13a1d0baad6804 columnFamilyName tr 2023-07-16 19:15:27,019 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] regionserver.HStore(310): Store=da6797ffc5b2850bae13a1d0baad6804/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:27,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:27,024 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da6797ffc5b2850bae13a1d0baad6804; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11571950400, jitterRate=0.07772186398506165}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:27,024 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:27,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804., pid=119, masterSystemTime=1689534927012 2023-07-16 19:15:27,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:27,026 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 
2023-07-16 19:15:27,027 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:27,027 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534927027"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534927027"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534927027"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534927027"}]},"ts":"1689534927027"} 2023-07-16 19:15:27,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-16 19:15:27,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,37881,1689534906681 in 168 msec 2023-07-16 19:15:27,031 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE in 490 msec 2023-07-16 19:15:27,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-16 19:15:27,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-16 19:15:27,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:27,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:27,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:27,551 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:27,553 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 19:15:27,553 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:27,554 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 19:15:27,554 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:27,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 19:15:27,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:27,556 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:27,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:27,558 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-16 19:15:27,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:27,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:27,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:27,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:27,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:27,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:27,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:27,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:27,586 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201] to rsgroup normal 2023-07-16 19:15:27,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:27,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:27,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:27,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:27,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42201,1689534906603] are moved back to default 2023-07-16 19:15:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-16 19:15:27,594 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:27,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:27,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:27,601 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 19:15:27,601 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:27,603 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:27,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-16 19:15:27,607 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:27,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-16 19:15:27,610 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:27,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 19:15:27,610 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:27,611 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:27,611 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 19:15:27,612 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:27,614 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:27,616 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:27,617 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 empty. 2023-07-16 19:15:27,617 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:27,618 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-16 19:15:27,641 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:27,643 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 39dcd59af26924a92354f901d71240d5, NAME => 'unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:27,663 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:27,663 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 39dcd59af26924a92354f901d71240d5, disabling compactions & flushes 2023-07-16 19:15:27,663 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:27,663 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:27,663 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. after waiting 0 ms 2023-07-16 19:15:27,663 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:27,663 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 
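[editor's note] The MoveTables, AddRSGroup and MoveServers requests logged above are issued through the rsgroup coprocessor-endpoint client rather than plain Admin. The sketch below shows the corresponding calls, assuming the RSGroupAdminClient API shipped in the branch-2.4 hbase-rsgroup module; treat the exact signatures as an assumption if you are on a different release line:

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveSketch {
  // Replays the admin calls behind the RSGroupAdminService requests in the log above.
  static void moveIntoGroups(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // "move tables [testRename] to rsgroup oldgroup"
    rsGroupAdmin.moveTables(
        Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
    // "add rsgroup normal"
    rsGroupAdmin.addRSGroup("normal");
    // "move servers [jenkins-hbase4.apache.org:42201] to rsgroup normal"
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:42201")),
        "normal");
  }
}

Each of these client calls shows up in the log twice: once as the RSGroupAdminEndpoint$RSGroupAdminServiceImpl message describing the operation, and once as the MasterRpcServices "master service request" audit line for the matching RSGroupAdminService RPC.
[end editor's note]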
2023-07-16 19:15:27,663 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:27,666 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:27,667 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534927667"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534927667"}]},"ts":"1689534927667"} 2023-07-16 19:15:27,668 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:27,669 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:27,669 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534927669"}]},"ts":"1689534927669"} 2023-07-16 19:15:27,670 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-16 19:15:27,674 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, ASSIGN}] 2023-07-16 19:15:27,676 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, ASSIGN 2023-07-16 19:15:27,677 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:27,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 19:15:27,828 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:27,829 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534927828"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534927828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534927828"}]},"ts":"1689534927828"} 2023-07-16 19:15:27,830 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:27,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-16 19:15:27,988 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:27,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39dcd59af26924a92354f901d71240d5, NAME => 'unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:27,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:27,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:27,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:27,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:27,995 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:27,997 DEBUG [StoreOpener-39dcd59af26924a92354f901d71240d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/ut 2023-07-16 19:15:27,998 DEBUG [StoreOpener-39dcd59af26924a92354f901d71240d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/ut 2023-07-16 19:15:27,998 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39dcd59af26924a92354f901d71240d5 columnFamilyName ut 2023-07-16 19:15:27,999 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] regionserver.HStore(310): Store=39dcd59af26924a92354f901d71240d5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:28,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:28,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 39dcd59af26924a92354f901d71240d5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10126647040, jitterRate=-0.056882500648498535}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:28,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:28,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5., pid=122, masterSystemTime=1689534927982 2023-07-16 19:15:28,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:28,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 
2023-07-16 19:15:28,016 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:28,016 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534928016"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534928016"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534928016"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534928016"}]},"ts":"1689534928016"} 2023-07-16 19:15:28,019 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-16 19:15:28,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,46561,1689534906430 in 188 msec 2023-07-16 19:15:28,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-16 19:15:28,021 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, ASSIGN in 345 msec 2023-07-16 19:15:28,022 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:28,022 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534928022"}]},"ts":"1689534928022"} 2023-07-16 19:15:28,023 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-16 19:15:28,026 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:28,027 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 423 msec 2023-07-16 19:15:28,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 19:15:28,213 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-16 19:15:28,214 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-16 19:15:28,214 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:28,218 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-16 19:15:28,218 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:28,218 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-16 19:15:28,220 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-16 19:15:28,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 19:15:28,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:28,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:28,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:28,231 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:28,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-16 19:15:28,233 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 39dcd59af26924a92354f901d71240d5 to RSGroup normal 2023-07-16 19:15:28,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE 2023-07-16 19:15:28,234 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-16 19:15:28,234 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE 2023-07-16 19:15:28,235 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:28,235 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534928235"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534928235"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534928235"}]},"ts":"1689534928235"} 2023-07-16 19:15:28,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:28,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 39dcd59af26924a92354f901d71240d5, disabling compactions & flushes 2023-07-16 19:15:28,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 
2023-07-16 19:15:28,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:28,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. after waiting 0 ms 2023-07-16 19:15:28,393 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:28,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:28,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:28,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:28,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 39dcd59af26924a92354f901d71240d5 move to jenkins-hbase4.apache.org,42201,1689534906603 record at close sequenceid=2 2023-07-16 19:15:28,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,401 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=CLOSED 2023-07-16 19:15:28,401 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534928401"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534928401"}]},"ts":"1689534928401"} 2023-07-16 19:15:28,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-16 19:15:28,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,46561,1689534906430 in 166 msec 2023-07-16 19:15:28,404 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:28,555 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:28,555 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534928555"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534928555"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534928555"}]},"ts":"1689534928555"} 2023-07-16 19:15:28,557 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:28,714 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:28,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39dcd59af26924a92354f901d71240d5, NAME => 'unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:28,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:28,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,717 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,718 DEBUG [StoreOpener-39dcd59af26924a92354f901d71240d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/ut 2023-07-16 19:15:28,718 DEBUG [StoreOpener-39dcd59af26924a92354f901d71240d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/ut 2023-07-16 19:15:28,719 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
39dcd59af26924a92354f901d71240d5 columnFamilyName ut 2023-07-16 19:15:28,719 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] regionserver.HStore(310): Store=39dcd59af26924a92354f901d71240d5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:28,720 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,721 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:28,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 39dcd59af26924a92354f901d71240d5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10322115840, jitterRate=-0.03867805004119873}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:28,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:28,726 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5., pid=125, masterSystemTime=1689534928709 2023-07-16 19:15:28,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:28,727 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 
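[editor's note] The GetRSGroupInfoOfTable / GetRSGroupInfo / ListRSGroupInfos retrievals interleaved through these entries, and the "rename rsgroup from oldgroup to newgroup" request that appears just below, map to simple read-and-rename calls on the same client. A hedged sketch, assuming the renameRSGroup(oldName, newName) method present alongside the RenameRSGroup RPC in this branch-2.4 rsgroup endpoint (the class name and println output are illustrative):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupRenameSketch {
  static void renameAndVerify(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // GetRSGroupInfoOfTable: which group does unmovedTable belong to right now?
    RSGroupInfo before = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("unmovedTable"));
    System.out.println("unmovedTable group: " + before.getName());
    // RenameRSGroup: "rename rsgroup from oldgroup to newgroup"
    rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
    // GetRSGroupInfo: confirm the renamed group is visible and still owns testRename.
    RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
    System.out.println("tables in newgroup: " + renamed.getTables());
  }
}

Note that the rename only rewrites group metadata in the /hbase/rsgroup znodes (the "Writing ZK GroupInfo count" entries); no region movement is triggered, which is why the REOPEN/MOVE procedures in this log only appear for the explicit MoveTables calls.
[end editor's note]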
2023-07-16 19:15:28,728 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:28,728 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534928728"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534928728"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534928728"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534928728"}]},"ts":"1689534928728"} 2023-07-16 19:15:28,731 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-16 19:15:28,731 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,42201,1689534906603 in 172 msec 2023-07-16 19:15:28,732 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE in 498 msec 2023-07-16 19:15:29,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-16 19:15:29,234 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-16 19:15:29,234 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:29,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:29,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:29,240 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:29,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 19:15:29,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:29,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 19:15:29,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:29,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 19:15:29,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:29,244 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-16 19:15:29,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:29,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:29,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:29,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:29,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-16 19:15:29,250 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-16 19:15:29,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:29,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:29,256 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-16 19:15:29,256 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:29,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 19:15:29,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:29,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 19:15:29,258 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:29,261 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:29,261 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:29,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-16 19:15:29,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:29,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:29,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:29,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:29,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:29,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-16 19:15:29,271 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region 39dcd59af26924a92354f901d71240d5 to RSGroup default 2023-07-16 19:15:29,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE 2023-07-16 19:15:29,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 19:15:29,272 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE 2023-07-16 19:15:29,273 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:29,273 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534929273"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534929273"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534929273"}]},"ts":"1689534929273"} 2023-07-16 19:15:29,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:29,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 39dcd59af26924a92354f901d71240d5, disabling compactions & flushes 2023-07-16 19:15:29,428 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:29,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:29,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. after waiting 0 ms 2023-07-16 19:15:29,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:29,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:29,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:29,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:29,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 39dcd59af26924a92354f901d71240d5 move to jenkins-hbase4.apache.org,46561,1689534906430 record at close sequenceid=5 2023-07-16 19:15:29,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,435 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=CLOSED 2023-07-16 19:15:29,435 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534929435"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534929435"}]},"ts":"1689534929435"} 2023-07-16 19:15:29,438 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-16 19:15:29,438 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,42201,1689534906603 in 163 msec 2023-07-16 19:15:29,438 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:29,589 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:29,589 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534929589"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534929589"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534929589"}]},"ts":"1689534929589"} 2023-07-16 19:15:29,591 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:29,746 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:29,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 39dcd59af26924a92354f901d71240d5, NAME => 'unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:29,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:29,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,748 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,750 DEBUG [StoreOpener-39dcd59af26924a92354f901d71240d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/ut 2023-07-16 19:15:29,750 DEBUG [StoreOpener-39dcd59af26924a92354f901d71240d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/ut 2023-07-16 19:15:29,750 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 39dcd59af26924a92354f901d71240d5 columnFamilyName ut 2023-07-16 19:15:29,751 INFO [StoreOpener-39dcd59af26924a92354f901d71240d5-1] regionserver.HStore(310): Store=39dcd59af26924a92354f901d71240d5/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:29,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:29,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 39dcd59af26924a92354f901d71240d5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9977584160, jitterRate=-0.07076506316661835}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:29,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:29,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5., pid=128, masterSystemTime=1689534929742 2023-07-16 19:15:29,758 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:29,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 
2023-07-16 19:15:29,758 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=39dcd59af26924a92354f901d71240d5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:29,758 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689534929758"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534929758"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534929758"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534929758"}]},"ts":"1689534929758"} 2023-07-16 19:15:29,762 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-16 19:15:29,762 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 39dcd59af26924a92354f901d71240d5, server=jenkins-hbase4.apache.org,46561,1689534906430 in 169 msec 2023-07-16 19:15:29,763 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=39dcd59af26924a92354f901d71240d5, REOPEN/MOVE in 491 msec 2023-07-16 19:15:29,973 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 19:15:30,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-16 19:15:30,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-16 19:15:30,272 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:30,274 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42201] to rsgroup default 2023-07-16 19:15:30,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 19:15:30,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:30,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:30,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:30,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:30,279 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-16 19:15:30,279 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42201,1689534906603] are moved back to normal 2023-07-16 19:15:30,279 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-16 19:15:30,279 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:30,280 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-16 19:15:30,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:30,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:30,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:30,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 19:15:30,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:30,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:30,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:30,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:30,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:30,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:30,287 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:30,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:30,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:30,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:30,292 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:30,294 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-16 19:15:30,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:30,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:30,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:30,298 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-16 19:15:30,298 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(345): Moving region da6797ffc5b2850bae13a1d0baad6804 to RSGroup default 2023-07-16 19:15:30,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE 2023-07-16 19:15:30,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 19:15:30,299 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE 2023-07-16 19:15:30,300 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:30,300 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534930300"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534930300"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534930300"}]},"ts":"1689534930300"} 2023-07-16 19:15:30,301 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,37881,1689534906681}] 2023-07-16 19:15:30,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da6797ffc5b2850bae13a1d0baad6804, disabling compactions & flushes 2023-07-16 19:15:30,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:30,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:30,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. after waiting 0 ms 2023-07-16 19:15:30,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:30,460 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 19:15:30,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 
2023-07-16 19:15:30,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:30,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding da6797ffc5b2850bae13a1d0baad6804 move to jenkins-hbase4.apache.org,42201,1689534906603 record at close sequenceid=5 2023-07-16 19:15:30,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,464 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=CLOSED 2023-07-16 19:15:30,465 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534930464"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534930464"}]},"ts":"1689534930464"} 2023-07-16 19:15:30,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-16 19:15:30,468 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,37881,1689534906681 in 165 msec 2023-07-16 19:15:30,469 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:30,619 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 19:15:30,620 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:30,620 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534930619"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534930619"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534930619"}]},"ts":"1689534930619"} 2023-07-16 19:15:30,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:30,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 
2023-07-16 19:15:30,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da6797ffc5b2850bae13a1d0baad6804, NAME => 'testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:30,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:30,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,780 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,781 DEBUG [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/tr 2023-07-16 19:15:30,781 DEBUG [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/tr 2023-07-16 19:15:30,781 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da6797ffc5b2850bae13a1d0baad6804 columnFamilyName tr 2023-07-16 19:15:30,782 INFO [StoreOpener-da6797ffc5b2850bae13a1d0baad6804-1] regionserver.HStore(310): Store=da6797ffc5b2850bae13a1d0baad6804/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:30,783 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,789 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:30,790 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da6797ffc5b2850bae13a1d0baad6804; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10624433280, jitterRate=-0.010522544384002686}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:30,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:30,791 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804., pid=131, masterSystemTime=1689534930773 2023-07-16 19:15:30,793 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:30,794 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:30,794 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=da6797ffc5b2850bae13a1d0baad6804, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:30,794 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689534930794"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534930794"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534930794"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534930794"}]},"ts":"1689534930794"} 2023-07-16 19:15:30,799 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-16 19:15:30,800 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure da6797ffc5b2850bae13a1d0baad6804, server=jenkins-hbase4.apache.org,42201,1689534906603 in 175 msec 2023-07-16 19:15:30,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=da6797ffc5b2850bae13a1d0baad6804, REOPEN/MOVE in 501 msec 2023-07-16 19:15:31,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-16 19:15:31,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-16 19:15:31,299 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:31,301 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup default 2023-07-16 19:15:31,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 19:15:31,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:31,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-16 19:15:31,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to newgroup 2023-07-16 19:15:31,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-16 19:15:31,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:31,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-16 19:15:31,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:31,313 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:31,317 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:31,318 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:31,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:31,327 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:31,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,330 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,332 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:31,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536131332, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:31,333 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:31,334 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:31,335 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,335 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,335 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:31,336 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:31,336 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,354 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=508 (was 510), OpenFileDescriptor=770 (was 789), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=482 (was 463) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 172), AvailableMemoryMB=2738 (was 2852) 2023-07-16 19:15:31,354 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 19:15:31,370 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=482, ProcessCount=172, AvailableMemoryMB=2738 2023-07-16 19:15:31,371 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 19:15:31,371 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-16 19:15:31,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,375 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:31,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:31,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:31,376 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:31,377 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:31,377 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:31,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:31,383 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:31,386 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:31,387 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:31,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:31,392 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:31,395 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,395 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:31,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536131397, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:31,398 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:31,400 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:31,400 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,401 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:31,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:31,401 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-16 19:15:31,402 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:31,408 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-16 19:15:31,408 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-16 19:15:31,409 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-16 19:15:31,409 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,409 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-16 19:15:31,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:41906 deadline: 1689536131409, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-16 19:15:31,411 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-16 19:15:31,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 805 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:41906 deadline: 1689536131411, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 19:15:31,414 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-16 19:15:31,415 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-16 19:15:31,420 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-16 19:15:31,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:41906 deadline: 1689536131419, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 19:15:31,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,424 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:31,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:31,425 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:31,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:31,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:31,426 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:31,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:31,432 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:31,434 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:31,435 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:31,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:31,443 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:31,446 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,447 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,448 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:31,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536131448, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:31,452 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:31,453 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:31,454 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,454 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,454 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:31,455 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:31,455 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,474 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x18bd1b24-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x62be270e-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=770 (was 770), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=482 (was 482), ProcessCount=172 (was 172), AvailableMemoryMB=2732 (was 2738) 2023-07-16 19:15:31,475 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 19:15:31,495 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=770, MaxFileDescriptor=60000, SystemLoadAverage=482, ProcessCount=172, AvailableMemoryMB=2730 2023-07-16 19:15:31,495 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 19:15:31,495 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-16 19:15:31,499 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,500 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,501 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:31,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:31,502 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:31,503 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:31,503 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:31,504 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:31,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:31,511 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:31,514 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:31,516 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:31,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,519 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:31,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:31,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,526 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,528 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:31,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:31,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536131528, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:31,529 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:31,531 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:31,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,532 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,532 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:31,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:31,533 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:31,534 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,535 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 
19:15:31,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:31,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:31,551 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,552 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,560 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 19:15:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to default 2023-07-16 19:15:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,566 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:31,570 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:31,570 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:31,575 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,575 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:31,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:31,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:31,581 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:31,582 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-16 19:15:31,583 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 19:15:31,583 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:31,584 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_498383658 2023-07-16 19:15:31,584 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:31,584 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:31,590 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:31,595 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:31,595 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:31,595 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:31,595 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:31,595 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:31,596 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 empty. 2023-07-16 19:15:31,596 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 empty. 2023-07-16 19:15:31,596 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 empty. 2023-07-16 19:15:31,596 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 empty. 2023-07-16 19:15:31,596 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 empty. 2023-07-16 19:15:31,597 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:31,597 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:31,597 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:31,597 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:31,597 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:31,597 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 19:15:31,619 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:31,621 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7e6172bb2061c3f531f39b9e12c57401, NAME => 'Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:31,621 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 105a27cb033e5a1828be55c43553c645, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:31,621 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 21afff1b290f2da5412b84c94e99f893, NAME => 'Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:31,683 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:31,683 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 7e6172bb2061c3f531f39b9e12c57401, disabling compactions & flushes 2023-07-16 19:15:31,683 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:31,683 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:31,683 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. after waiting 0 ms 2023-07-16 19:15:31,683 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:31,683 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 
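For context on the RSGroupAdminService requests logged at 19:15:31,544-31,575 above (AddRSGroup, MoveServers, GetRSGroupInfo), the client-side calls look roughly like the sketch below. This is a minimal illustration assuming the RSGroupAdminClient API from the hbase-rsgroup module; the group and server names are copied from the log, and the exact helper used by TestRSGroupsAdmin1 is not shown in this excerpt.

import java.util.Set;
import java.util.TreeSet;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveSketch {
  // Illustrative only: issues the same kinds of requests seen in the log above.
  static void moveServersIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    String group = "Group_testDisabledTableMove_498383658";   // group name taken from the log
    rsGroupAdmin.addRSGroup(group);                           // -> RSGroupAdminService.AddRSGroup
    Set<Address> servers = new TreeSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35369));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37881));
    rsGroupAdmin.moveServers(servers, group);                 // -> RSGroupAdminService.MoveServers
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);    // -> RSGroupAdminService.GetRSGroupInfo
    assert info.getServers().containsAll(servers);            // servers now belong to the new group
  }
}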
2023-07-16 19:15:31,683 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 7e6172bb2061c3f531f39b9e12c57401: 2023-07-16 19:15:31,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 19:15:31,684 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 422d685570e91fa1ee17b3acdcc08223, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:31,687 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 105a27cb033e5a1828be55c43553c645, disabling compactions & flushes 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 21afff1b290f2da5412b84c94e99f893, disabling compactions & flushes 2023-07-16 19:15:31,688 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. after waiting 0 ms 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:31,688 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:31,688 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 
2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 105a27cb033e5a1828be55c43553c645: 2023-07-16 19:15:31,688 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:31,689 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. after waiting 0 ms 2023-07-16 19:15:31,689 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 85b368c440f00aab23f334328127a6e7, NAME => 'Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp 2023-07-16 19:15:31,689 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:31,689 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:31,689 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 21afff1b290f2da5412b84c94e99f893: 2023-07-16 19:15:31,699 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:31,699 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 422d685570e91fa1ee17b3acdcc08223, disabling compactions & flushes 2023-07-16 19:15:31,699 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:31,699 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:31,699 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. after waiting 0 ms 2023-07-16 19:15:31,699 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 
2023-07-16 19:15:31,699 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:31,699 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 422d685570e91fa1ee17b3acdcc08223: 2023-07-16 19:15:31,714 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:31,714 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 85b368c440f00aab23f334328127a6e7, disabling compactions & flushes 2023-07-16 19:15:31,714 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:31,714 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:31,714 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. after waiting 0 ms 2023-07-16 19:15:31,714 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:31,714 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 
2023-07-16 19:15:31,714 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 85b368c440f00aab23f334328127a6e7: 2023-07-16 19:15:31,717 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:31,718 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534931718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534931718"}]},"ts":"1689534931718"} 2023-07-16 19:15:31,719 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534931718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534931718"}]},"ts":"1689534931718"} 2023-07-16 19:15:31,719 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534931718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534931718"}]},"ts":"1689534931718"} 2023-07-16 19:15:31,719 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534931718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534931718"}]},"ts":"1689534931718"} 2023-07-16 19:15:31,719 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534931718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534931718"}]},"ts":"1689534931718"} 2023-07-16 19:15:31,721 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
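The CreateTableProcedure above (pid=132) writes the filesystem layout for five regions and then adds five rows to hbase:meta. A request of that shape is typically issued from the client roughly as sketched below; this is an illustrative sketch assuming the standard Admin/TableDescriptorBuilder API, where the ASCII split keys stand in for the binary boundaries (i\xBF\x14i\xBE, r\x1C\xC7r\x1B) seen in the log and the real test helper is not shown in this excerpt.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  // Illustrative only: creates a table with one family 'f' pre-split into five regions,
  // matching the five regions added to hbase:meta in the log above.
  static void createPreSplitTable(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Four split keys => five regions; the middle two are placeholders for the
    // binary keys that appear escaped in the log.
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytes("iiiii"),
        Bytes.toBytes("rrrrr"),
        Bytes.toBytes("zzzzz")
    };
    admin.createTable(desc, splitKeys);  // drives CreateTableProcedure on the master
  }
}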
2023-07-16 19:15:31,722 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:31,722 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534931722"}]},"ts":"1689534931722"} 2023-07-16 19:15:31,723 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-16 19:15:31,726 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:31,726 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:31,727 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:31,727 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:31,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, ASSIGN}] 2023-07-16 19:15:31,729 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, ASSIGN 2023-07-16 19:15:31,729 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, ASSIGN 2023-07-16 19:15:31,729 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, ASSIGN 2023-07-16 19:15:31,730 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, ASSIGN 2023-07-16 19:15:31,730 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, ASSIGN 2023-07-16 19:15:31,730 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:31,731 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:31,731 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:31,731 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42201,1689534906603; forceNewPlan=false, retain=false 2023-07-16 19:15:31,731 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46561,1689534906430; forceNewPlan=false, retain=false 2023-07-16 19:15:31,881 INFO [jenkins-hbase4:38143] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 19:15:31,885 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=105a27cb033e5a1828be55c43553c645, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:31,885 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=422d685570e91fa1ee17b3acdcc08223, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:31,885 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=85b368c440f00aab23f334328127a6e7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:31,885 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534931885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534931885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534931885"}]},"ts":"1689534931885"} 2023-07-16 19:15:31,885 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534931885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534931885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534931885"}]},"ts":"1689534931885"} 2023-07-16 19:15:31,885 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534931885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534931885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534931885"}]},"ts":"1689534931885"} 2023-07-16 19:15:31,885 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=21afff1b290f2da5412b84c94e99f893, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:31,885 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534931885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534931885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534931885"}]},"ts":"1689534931885"} 2023-07-16 19:15:31,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 19:15:31,885 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=7e6172bb2061c3f531f39b9e12c57401, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:31,886 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534931885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534931885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534931885"}]},"ts":"1689534931885"} 2023-07-16 19:15:31,887 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=136, state=RUNNABLE; OpenRegionProcedure 422d685570e91fa1ee17b3acdcc08223, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:31,888 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=135, state=RUNNABLE; OpenRegionProcedure 105a27cb033e5a1828be55c43553c645, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:31,892 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=134, state=RUNNABLE; OpenRegionProcedure 21afff1b290f2da5412b84c94e99f893, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:31,893 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=137, state=RUNNABLE; OpenRegionProcedure 85b368c440f00aab23f334328127a6e7, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:31,893 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=133, state=RUNNABLE; OpenRegionProcedure 7e6172bb2061c3f531f39b9e12c57401, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:32,043 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 105a27cb033e5a1828be55c43553c645, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 19:15:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,046 INFO [StoreOpener-105a27cb033e5a1828be55c43553c645-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 
2023-07-16 19:15:32,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 21afff1b290f2da5412b84c94e99f893, NAME => 'Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 19:15:32,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:32,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,050 INFO [StoreOpener-21afff1b290f2da5412b84c94e99f893-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,053 DEBUG [StoreOpener-105a27cb033e5a1828be55c43553c645-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/f 2023-07-16 19:15:32,053 DEBUG [StoreOpener-105a27cb033e5a1828be55c43553c645-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/f 2023-07-16 19:15:32,053 INFO [StoreOpener-105a27cb033e5a1828be55c43553c645-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 105a27cb033e5a1828be55c43553c645 columnFamilyName f 2023-07-16 19:15:32,053 DEBUG [StoreOpener-21afff1b290f2da5412b84c94e99f893-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/f 2023-07-16 19:15:32,053 DEBUG [StoreOpener-21afff1b290f2da5412b84c94e99f893-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/f 2023-07-16 19:15:32,054 INFO 
[StoreOpener-21afff1b290f2da5412b84c94e99f893-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 21afff1b290f2da5412b84c94e99f893 columnFamilyName f 2023-07-16 19:15:32,054 INFO [StoreOpener-105a27cb033e5a1828be55c43553c645-1] regionserver.HStore(310): Store=105a27cb033e5a1828be55c43553c645/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:32,054 INFO [StoreOpener-21afff1b290f2da5412b84c94e99f893-1] regionserver.HStore(310): Store=21afff1b290f2da5412b84c94e99f893/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:32,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,056 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:32,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 105a27cb033e5a1828be55c43553c645; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11722466560, jitterRate=0.09173977375030518}}}, 
FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:32,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 105a27cb033e5a1828be55c43553c645: 2023-07-16 19:15:32,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:32,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 21afff1b290f2da5412b84c94e99f893; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9701730240, jitterRate=-0.09645596146583557}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:32,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645., pid=139, masterSystemTime=1689534932039 2023-07-16 19:15:32,062 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 21afff1b290f2da5412b84c94e99f893: 2023-07-16 19:15:32,062 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893., pid=140, masterSystemTime=1689534932044 2023-07-16 19:15:32,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:32,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:32,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 
2023-07-16 19:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 422d685570e91fa1ee17b3acdcc08223, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 19:15:32,064 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=105a27cb033e5a1828be55c43553c645, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,064 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932064"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534932064"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534932064"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534932064"}]},"ts":"1689534932064"} 2023-07-16 19:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:32,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:32,065 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 
2023-07-16 19:15:32,065 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=21afff1b290f2da5412b84c94e99f893, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7e6172bb2061c3f531f39b9e12c57401, NAME => 'Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 19:15:32,065 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932065"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534932065"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534932065"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534932065"}]},"ts":"1689534932065"} 2023-07-16 19:15:32,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:32,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,066 INFO [StoreOpener-422d685570e91fa1ee17b3acdcc08223-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,067 INFO [StoreOpener-7e6172bb2061c3f531f39b9e12c57401-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,068 DEBUG [StoreOpener-422d685570e91fa1ee17b3acdcc08223-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/f 2023-07-16 19:15:32,068 DEBUG [StoreOpener-422d685570e91fa1ee17b3acdcc08223-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/f 2023-07-16 19:15:32,068 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=135 2023-07-16 19:15:32,068 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; OpenRegionProcedure 
105a27cb033e5a1828be55c43553c645, server=jenkins-hbase4.apache.org,46561,1689534906430 in 178 msec 2023-07-16 19:15:32,068 INFO [StoreOpener-422d685570e91fa1ee17b3acdcc08223-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 422d685570e91fa1ee17b3acdcc08223 columnFamilyName f 2023-07-16 19:15:32,069 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=134 2023-07-16 19:15:32,069 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=134, state=SUCCESS; OpenRegionProcedure 21afff1b290f2da5412b84c94e99f893, server=jenkins-hbase4.apache.org,42201,1689534906603 in 175 msec 2023-07-16 19:15:32,069 DEBUG [StoreOpener-7e6172bb2061c3f531f39b9e12c57401-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/f 2023-07-16 19:15:32,069 DEBUG [StoreOpener-7e6172bb2061c3f531f39b9e12c57401-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/f 2023-07-16 19:15:32,069 INFO [StoreOpener-422d685570e91fa1ee17b3acdcc08223-1] regionserver.HStore(310): Store=422d685570e91fa1ee17b3acdcc08223/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:32,069 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, ASSIGN in 341 msec 2023-07-16 19:15:32,070 INFO [StoreOpener-7e6172bb2061c3f531f39b9e12c57401-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7e6172bb2061c3f531f39b9e12c57401 columnFamilyName f 2023-07-16 19:15:32,070 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, ASSIGN in 342 msec 2023-07-16 19:15:32,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) 
under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,070 INFO [StoreOpener-7e6172bb2061c3f531f39b9e12c57401-1] regionserver.HStore(310): Store=7e6172bb2061c3f531f39b9e12c57401/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:32,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:32,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 422d685570e91fa1ee17b3acdcc08223; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11618710720, jitterRate=0.08207675814628601}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:32,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 422d685570e91fa1ee17b3acdcc08223: 2023-07-16 19:15:32,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223., pid=138, masterSystemTime=1689534932039 2023-07-16 19:15:32,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:32,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 
2023-07-16 19:15:32,079 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:32,079 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7e6172bb2061c3f531f39b9e12c57401; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9746448640, jitterRate=-0.09229123592376709}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:32,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7e6172bb2061c3f531f39b9e12c57401: 2023-07-16 19:15:32,079 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=422d685570e91fa1ee17b3acdcc08223, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:32,079 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932079"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534932079"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534932079"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534932079"}]},"ts":"1689534932079"} 2023-07-16 19:15:32,081 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401., pid=142, masterSystemTime=1689534932044 2023-07-16 19:15:32,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:32,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:32,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 
2023-07-16 19:15:32,085 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=7e6172bb2061c3f531f39b9e12c57401, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 85b368c440f00aab23f334328127a6e7, NAME => 'Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 19:15:32,085 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534932085"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534932085"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534932085"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534932085"}]},"ts":"1689534932085"} 2023-07-16 19:15:32,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:32,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,087 INFO [StoreOpener-85b368c440f00aab23f334328127a6e7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,088 DEBUG [StoreOpener-85b368c440f00aab23f334328127a6e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/f 2023-07-16 19:15:32,088 DEBUG [StoreOpener-85b368c440f00aab23f334328127a6e7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/f 2023-07-16 19:15:32,089 INFO [StoreOpener-85b368c440f00aab23f334328127a6e7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 85b368c440f00aab23f334328127a6e7 columnFamilyName f 2023-07-16 19:15:32,090 INFO [StoreOpener-85b368c440f00aab23f334328127a6e7-1] regionserver.HStore(310): Store=85b368c440f00aab23f334328127a6e7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:32,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,093 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=136 2023-07-16 19:15:32,093 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=136, state=SUCCESS; OpenRegionProcedure 422d685570e91fa1ee17b3acdcc08223, server=jenkins-hbase4.apache.org,46561,1689534906430 in 204 msec 2023-07-16 19:15:32,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,096 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,097 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, ASSIGN in 367 msec 2023-07-16 19:15:32,098 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=133 2023-07-16 19:15:32,098 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=133, state=SUCCESS; OpenRegionProcedure 7e6172bb2061c3f531f39b9e12c57401, server=jenkins-hbase4.apache.org,42201,1689534906603 in 201 msec 2023-07-16 19:15:32,100 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, ASSIGN in 371 msec 2023-07-16 19:15:32,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:32,100 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 85b368c440f00aab23f334328127a6e7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11089279520, jitterRate=0.03276963531970978}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:32,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 85b368c440f00aab23f334328127a6e7: 2023-07-16 19:15:32,101 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7., pid=141, 
masterSystemTime=1689534932044 2023-07-16 19:15:32,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:32,103 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:32,103 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=85b368c440f00aab23f334328127a6e7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,103 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534932103"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534932103"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534932103"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534932103"}]},"ts":"1689534932103"} 2023-07-16 19:15:32,106 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=137 2023-07-16 19:15:32,107 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=137, state=SUCCESS; OpenRegionProcedure 85b368c440f00aab23f334328127a6e7, server=jenkins-hbase4.apache.org,42201,1689534906603 in 212 msec 2023-07-16 19:15:32,109 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=132 2023-07-16 19:15:32,109 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, ASSIGN in 380 msec 2023-07-16 19:15:32,110 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:32,110 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534932110"}]},"ts":"1689534932110"} 2023-07-16 19:15:32,111 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-16 19:15:32,114 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:32,116 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 537 msec 2023-07-16 19:15:32,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 19:15:32,187 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-16 19:15:32,187 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove 
get assigned. Timeout = 60000ms 2023-07-16 19:15:32,187 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:32,191 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-16 19:15:32,191 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:32,191 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-16 19:15:32,192 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:32,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 19:15:32,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:32,200 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 19:15:32,200 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 19:15:32,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 19:15:32,205 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534932205"}]},"ts":"1689534932205"} 2023-07-16 19:15:32,206 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-16 19:15:32,208 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-16 19:15:32,209 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, UNASSIGN}] 2023-07-16 19:15:32,213 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, UNASSIGN 2023-07-16 19:15:32,213 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, UNASSIGN 2023-07-16 19:15:32,213 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, UNASSIGN 2023-07-16 19:15:32,214 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, UNASSIGN 2023-07-16 19:15:32,214 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, UNASSIGN 2023-07-16 19:15:32,214 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=105a27cb033e5a1828be55c43553c645, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:32,214 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=422d685570e91fa1ee17b3acdcc08223, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:32,215 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534932214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534932214"}]},"ts":"1689534932214"} 2023-07-16 19:15:32,214 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=21afff1b290f2da5412b84c94e99f893, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,214 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=7e6172bb2061c3f531f39b9e12c57401, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,215 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534932214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534932214"}]},"ts":"1689534932214"} 2023-07-16 19:15:32,215 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534932214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534932214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534932214"}]},"ts":"1689534932214"} 2023-07-16 19:15:32,215 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932214"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534932214"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534932214"}]},"ts":"1689534932214"} 2023-07-16 19:15:32,215 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=85b368c440f00aab23f334328127a6e7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,215 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534932215"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534932215"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534932215"}]},"ts":"1689534932215"} 2023-07-16 19:15:32,216 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=146, state=RUNNABLE; CloseRegionProcedure 105a27cb033e5a1828be55c43553c645, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:32,217 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 21afff1b290f2da5412b84c94e99f893, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:32,218 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=144, state=RUNNABLE; CloseRegionProcedure 7e6172bb2061c3f531f39b9e12c57401, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:32,219 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=147, state=RUNNABLE; CloseRegionProcedure 422d685570e91fa1ee17b3acdcc08223, server=jenkins-hbase4.apache.org,46561,1689534906430}] 2023-07-16 19:15:32,220 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=148, state=RUNNABLE; CloseRegionProcedure 85b368c440f00aab23f334328127a6e7, server=jenkins-hbase4.apache.org,42201,1689534906603}] 2023-07-16 19:15:32,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 19:15:32,370 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 105a27cb033e5a1828be55c43553c645, disabling compactions & flushes 2023-07-16 19:15:32,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 
2023-07-16 19:15:32,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:32,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. after waiting 0 ms 2023-07-16 19:15:32,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:32,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 21afff1b290f2da5412b84c94e99f893, disabling compactions & flushes 2023-07-16 19:15:32,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. after waiting 0 ms 2023-07-16 19:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:32,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:32,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645. 2023-07-16 19:15:32,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 105a27cb033e5a1828be55c43553c645: 2023-07-16 19:15:32,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 422d685570e91fa1ee17b3acdcc08223, disabling compactions & flushes 2023-07-16 19:15:32,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 
2023-07-16 19:15:32,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:32,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. after waiting 0 ms 2023-07-16 19:15:32,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:32,389 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=105a27cb033e5a1828be55c43553c645, regionState=CLOSED 2023-07-16 19:15:32,389 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932389"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534932389"}]},"ts":"1689534932389"} 2023-07-16 19:15:32,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:32,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893. 2023-07-16 19:15:32,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 21afff1b290f2da5412b84c94e99f893: 2023-07-16 19:15:32,392 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,393 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,393 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=21afff1b290f2da5412b84c94e99f893, regionState=CLOSED 2023-07-16 19:15:32,394 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932393"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534932393"}]},"ts":"1689534932393"} 2023-07-16 19:15:32,394 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=146 2023-07-16 19:15:32,394 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; CloseRegionProcedure 105a27cb033e5a1828be55c43553c645, server=jenkins-hbase4.apache.org,46561,1689534906430 in 175 msec 2023-07-16 19:15:32,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 85b368c440f00aab23f334328127a6e7, disabling compactions & flushes 2023-07-16 19:15:32,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 
2023-07-16 19:15:32,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:32,400 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=105a27cb033e5a1828be55c43553c645, UNASSIGN in 186 msec 2023-07-16 19:15:32,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. after waiting 0 ms 2023-07-16 19:15:32,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 2023-07-16 19:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:32,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-16 19:15:32,402 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 21afff1b290f2da5412b84c94e99f893, server=jenkins-hbase4.apache.org,42201,1689534906603 in 178 msec 2023-07-16 19:15:32,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223. 2023-07-16 19:15:32,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 422d685570e91fa1ee17b3acdcc08223: 2023-07-16 19:15:32,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,408 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=21afff1b290f2da5412b84c94e99f893, UNASSIGN in 193 msec 2023-07-16 19:15:32,408 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=422d685570e91fa1ee17b3acdcc08223, regionState=CLOSED 2023-07-16 19:15:32,408 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689534932408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534932408"}]},"ts":"1689534932408"} 2023-07-16 19:15:32,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:32,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7. 
2023-07-16 19:15:32,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 85b368c440f00aab23f334328127a6e7: 2023-07-16 19:15:32,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7e6172bb2061c3f531f39b9e12c57401, disabling compactions & flushes 2023-07-16 19:15:32,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:32,417 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=147 2023-07-16 19:15:32,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 2023-07-16 19:15:32,417 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=147, state=SUCCESS; CloseRegionProcedure 422d685570e91fa1ee17b3acdcc08223, server=jenkins-hbase4.apache.org,46561,1689534906430 in 191 msec 2023-07-16 19:15:32,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. after waiting 0 ms 2023-07-16 19:15:32,417 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 
2023-07-16 19:15:32,417 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=85b368c440f00aab23f334328127a6e7, regionState=CLOSED 2023-07-16 19:15:32,417 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534932417"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534932417"}]},"ts":"1689534932417"} 2023-07-16 19:15:32,419 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=422d685570e91fa1ee17b3acdcc08223, UNASSIGN in 208 msec 2023-07-16 19:15:32,422 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=148 2023-07-16 19:15:32,422 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=148, state=SUCCESS; CloseRegionProcedure 85b368c440f00aab23f334328127a6e7, server=jenkins-hbase4.apache.org,42201,1689534906603 in 199 msec 2023-07-16 19:15:32,424 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=85b368c440f00aab23f334328127a6e7, UNASSIGN in 213 msec 2023-07-16 19:15:32,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:32,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401. 
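The records that follow show the already-disabled table being moved into rsgroup Group_testDisabledTableMove_498383658: the group znodes are rewritten, but region moves are skipped ("Skipping move regions because the table ... is disabled", "Moving 0 region(s)"). A hedged sketch of that step using the coprocessor-based RSGroupAdminClient from the hbase-rsgroup module; only the table and group names come from the log, the helper method and connection handle are assumptions, and exact signatures can differ between minor versions.

    // Additional classes used: java.util.Collections,
    // org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient, org.apache.hadoop.hbase.rsgroup.RSGroupInfo.
    static void moveDisabledTableToGroup(Connection conn) throws IOException {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      rsGroupAdmin.moveTables(Collections.singleton(table),
          "Group_testDisabledTableMove_498383658");
      // The table is disabled, so no regions are reassigned; only the group
      // membership recorded under /hbase/rsgroup changes.
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table now in group: " + group.getName());
    }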
2023-07-16 19:15:32,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7e6172bb2061c3f531f39b9e12c57401: 2023-07-16 19:15:32,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,438 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=7e6172bb2061c3f531f39b9e12c57401, regionState=CLOSED 2023-07-16 19:15:32,438 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689534932438"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534932438"}]},"ts":"1689534932438"} 2023-07-16 19:15:32,441 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=144 2023-07-16 19:15:32,441 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=144, state=SUCCESS; CloseRegionProcedure 7e6172bb2061c3f531f39b9e12c57401, server=jenkins-hbase4.apache.org,42201,1689534906603 in 221 msec 2023-07-16 19:15:32,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-16 19:15:32,442 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=7e6172bb2061c3f531f39b9e12c57401, UNASSIGN in 232 msec 2023-07-16 19:15:32,443 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534932443"}]},"ts":"1689534932443"} 2023-07-16 19:15:32,444 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-16 19:15:32,445 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-16 19:15:32,447 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 246 msec 2023-07-16 19:15:32,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 19:15:32,507 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-16 19:15:32,507 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,509 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,512 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:32,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:32,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-16 19:15:32,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_498383658, current retry=0 2023-07-16 19:15:32,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_498383658. 2023-07-16 19:15:32,514 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:32,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,517 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 19:15:32,519 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:32,521 INFO [Listener at localhost/36799] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 19:15:32,522 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 19:15:32,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:32,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:41906 deadline: 1689534992522, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-16 19:15:32,523 DEBUG [Listener at localhost/36799] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-16 19:15:32,523 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-16 19:15:32,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,527 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_498383658' 2023-07-16 19:15:32,528 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:32,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:32,537 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,537 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,537 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,537 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,537 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,540 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/recovered.edits] 2023-07-16 19:15:32,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-16 19:15:32,541 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/recovered.edits] 2023-07-16 19:15:32,541 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/recovered.edits] 2023-07-16 19:15:32,542 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/recovered.edits] 2023-07-16 19:15:32,542 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/f, FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/recovered.edits] 2023-07-16 19:15:32,559 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223/recovered.edits/4.seqid 2023-07-16 19:15:32,561 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/recovered.edits/4.seqid to 
hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401/recovered.edits/4.seqid 2023-07-16 19:15:32,561 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/422d685570e91fa1ee17b3acdcc08223 2023-07-16 19:15:32,562 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893/recovered.edits/4.seqid 2023-07-16 19:15:32,562 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/7e6172bb2061c3f531f39b9e12c57401 2023-07-16 19:15:32,562 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645/recovered.edits/4.seqid 2023-07-16 19:15:32,563 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/21afff1b290f2da5412b84c94e99f893 2023-07-16 19:15:32,563 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/105a27cb033e5a1828be55c43553c645 2023-07-16 19:15:32,563 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/recovered.edits/4.seqid to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/archive/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7/recovered.edits/4.seqid 2023-07-16 19:15:32,564 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/.tmp/data/default/Group_testDisabledTableMove/85b368c440f00aab23f334328127a6e7 2023-07-16 19:15:32,564 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 19:15:32,567 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,571 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-16 19:15:32,579 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
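The records above show the disable-then-delete pattern: a second disableTable() call fails with TableNotEnabledException, the caller treats that as "already disabled, so just deleting it", and the DeleteTableProcedure then archives each region directory. A minimal sketch of that pattern; the helper name is illustrative, and it additionally uses org.apache.hadoop.hbase.TableNotEnabledException.

    // Sketch of the disable-then-delete pattern visible above.
    static void dropTable(Admin admin, TableName table) throws IOException {
      try {
        admin.disableTable(table);
      } catch (TableNotEnabledException e) {
        // Matches the log: "Table: ... already disabled, so just deleting it."
      }
      admin.deleteTable(table);  // archives region dirs under /archive and cleans hbase:meta rows
    }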
2023-07-16 19:15:32,581 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,581 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-16 19:15:32,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534932581"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:32,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534932581"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:32,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534932581"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:32,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534932581"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:32,581 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534932581"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:32,583 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 19:15:32,583 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7e6172bb2061c3f531f39b9e12c57401, NAME => 'Group_testDisabledTableMove,,1689534931577.7e6172bb2061c3f531f39b9e12c57401.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 21afff1b290f2da5412b84c94e99f893, NAME => 'Group_testDisabledTableMove,aaaaa,1689534931577.21afff1b290f2da5412b84c94e99f893.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 105a27cb033e5a1828be55c43553c645, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689534931577.105a27cb033e5a1828be55c43553c645.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 422d685570e91fa1ee17b3acdcc08223, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689534931577.422d685570e91fa1ee17b3acdcc08223.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 85b368c440f00aab23f334328127a6e7, NAME => 'Group_testDisabledTableMove,zzzzz,1689534931577.85b368c440f00aab23f334328127a6e7.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 19:15:32,583 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
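A small follow-up check, assuming the same Admin and RSGroupAdminClient handles as in the sketches above: once the delete completes, the table is gone from hbase:meta and the RSGroupAdminEndpoint has also dropped it from its group (see the earlier "Removing deleted table ... from rsgroup ..." record).

    // Sketch: verify both sides of the cleanup; method and parameters are illustrative.
    static void verifyDropped(Admin admin, RSGroupAdminClient rsGroupAdmin) throws IOException {
      TableName table = TableName.valueOf("Group_testDisabledTableMove");
      assert !admin.tableExists(table);
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_498383658");
      assert group == null || !group.getTables().contains(table);
    }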
2023-07-16 19:15:32,584 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534932583"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:32,585 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-16 19:15:32,587 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 19:15:32,589 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 63 msec 2023-07-16 19:15:32,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-16 19:15:32,642 INFO [Listener at localhost/36799] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-16 19:15:32,644 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:32,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
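The records that follow carry out the group teardown: the two servers in the temporary group are moved back to the default group and the group itself is removed. A hedged sketch of that step; the host:port pairs are taken from the log, the helper is illustrative, and it additionally uses java.util.Set, java.util.HashSet and org.apache.hadoop.hbase.net.Address.

    // Sketch of the teardown the following records perform.
    static void tearDownGroup(RSGroupAdminClient rsGroupAdmin) throws IOException {
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 35369));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37881));
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);  // "default"
      rsGroupAdmin.removeRSGroup("Group_testDisabledTableMove_498383658");
    }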
2023-07-16 19:15:32,646 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:32,646 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881] to rsgroup default 2023-07-16 19:15:32,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:32,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:32,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_498383658, current retry=0 2023-07-16 19:15:32,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,35369,1689534910605, jenkins-hbase4.apache.org,37881,1689534906681] are moved back to Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_498383658 => default 2023-07-16 19:15:32,651 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:32,652 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_498383658 2023-07-16 19:15:32,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:32,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:32,657 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:32,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:32,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
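The records that follow show the base test's per-test setup re-adding a "master" group and trying to move the active master's own address (jenkins-hbase4.apache.org:38143) into it; the group manager rejects this with a ConstraintException because only live region servers are known to it, and the test logs the exception as a warning and continues. A hedged sketch of that step; the helper is illustrative and additionally uses org.apache.hadoop.hbase.constraint.ConstraintException.

    // Sketch of the step whose records follow; the exception is expected and swallowed.
    static void tryMoveMasterAddress(RSGroupAdminClient rsGroupAdmin) throws IOException {
      rsGroupAdmin.addRSGroup("master");
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38143)),
            "master");
      } catch (ConstraintException e) {
        // Expected: "Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist."
      }
    }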
2023-07-16 19:15:32,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:32,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:32,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:32,659 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:32,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:32,664 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:32,666 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:32,667 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:32,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:32,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:32,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:32,673 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,673 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,675 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:32,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:32,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536132675, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:32,675 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:32,677 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:32,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,678 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:32,678 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:32,679 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:32,698 INFO [Listener at localhost/36799] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514 (was 512) Potentially hanging thread: hconnection-0x62be270e-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-941949956_17 at /127.0.0.1:60228 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1146923140_17 at /127.0.0.1:55842 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xfdeaa0f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=791 (was 770) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=482 (was 482), ProcessCount=172 (was 172), AvailableMemoryMB=2729 (was 2730) 2023-07-16 19:15:32,699 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 19:15:32,709 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-16 19:15:32,710 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-16 19:15:32,724 INFO [Listener at localhost/36799] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=514, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=482, ProcessCount=172, AvailableMemoryMB=2729 2023-07-16 19:15:32,724 WARN [Listener at localhost/36799] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 19:15:32,724 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-16 19:15:32,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:32,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:32,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:32,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:32,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:32,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:32,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:32,738 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:32,741 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:32,741 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:32,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:32,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:32,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:32,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:32,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,754 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38143] to rsgroup master 2023-07-16 19:15:32,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:32,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:41906 deadline: 1689536132754, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 2023-07-16 19:15:32,754 WARN [Listener at localhost/36799] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38143 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:32,756 INFO [Listener at localhost/36799] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:32,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:32,756 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:32,757 INFO [Listener at localhost/36799] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35369, jenkins-hbase4.apache.org:37881, jenkins-hbase4.apache.org:42201, jenkins-hbase4.apache.org:46561], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:32,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:32,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38143] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:32,758 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 19:15:32,758 INFO [Listener at localhost/36799] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 19:15:32,758 DEBUG [Listener at localhost/36799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x083f3c49 to 127.0.0.1:50949 2023-07-16 19:15:32,758 DEBUG [Listener at localhost/36799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,759 DEBUG [Listener at localhost/36799] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 19:15:32,759 DEBUG [Listener at localhost/36799] util.JVMClusterUtil(257): Found active master hash=973291378, stopped=false 2023-07-16 19:15:32,760 DEBUG [Listener at localhost/36799] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 19:15:32,760 DEBUG [Listener at localhost/36799] coprocessor.CoprocessorHost(310): Stop coprocessor 
org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 19:15:32,760 INFO [Listener at localhost/36799] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:32,767 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:32,767 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:32,767 INFO [Listener at localhost/36799] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 19:15:32,767 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:32,767 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:32,767 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:32,767 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:32,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:32,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:32,768 DEBUG [Listener at localhost/36799] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1ee633ed to 127.0.0.1:50949 2023-07-16 19:15:32,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:32,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:32,768 DEBUG [Listener at localhost/36799] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,768 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46561,1689534906430' ***** 2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 
2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42201,1689534906603' ***** 2023-07-16 19:15:32,769 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37881,1689534906681' ***** 2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:32,769 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:32,769 INFO [Listener at localhost/36799] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35369,1689534910605' ***** 2023-07-16 19:15:32,770 INFO [Listener at localhost/36799] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:32,771 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:32,771 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:32,788 INFO [RS:0;jenkins-hbase4:46561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1364e664{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:32,788 INFO [RS:1;jenkins-hbase4:42201] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2d178a1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:32,788 INFO [RS:3;jenkins-hbase4:35369] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@31489fd5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:32,788 INFO [RS:2;jenkins-hbase4:37881] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@36a7cf96{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:32,792 INFO [RS:2;jenkins-hbase4:37881] server.AbstractConnector(383): Stopped ServerConnector@1266d143{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:32,792 INFO [RS:3;jenkins-hbase4:35369] server.AbstractConnector(383): Stopped ServerConnector@6626147f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:32,792 INFO [RS:2;jenkins-hbase4:37881] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:32,792 INFO [RS:3;jenkins-hbase4:35369] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:32,792 INFO [RS:0;jenkins-hbase4:46561] server.AbstractConnector(383): Stopped ServerConnector@6fc105c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:32,792 INFO [RS:1;jenkins-hbase4:42201] server.AbstractConnector(383): Stopped ServerConnector@4ccea9bd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:32,793 INFO [RS:0;jenkins-hbase4:46561] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:32,793 INFO [RS:2;jenkins-hbase4:37881] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@5e2294fa{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:32,793 INFO [RS:1;jenkins-hbase4:42201] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:32,795 INFO [RS:3;jenkins-hbase4:35369] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a8106ca{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:32,795 INFO [RS:0;jenkins-hbase4:46561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@623f7cf4{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:32,795 INFO [RS:1;jenkins-hbase4:42201] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39d35c00{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:32,795 INFO [RS:2;jenkins-hbase4:37881] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5a76cee2{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:32,796 INFO [RS:0;jenkins-hbase4:46561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4784b602{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:32,796 INFO [RS:3;jenkins-hbase4:35369] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e42e83e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:32,797 INFO [RS:1;jenkins-hbase4:42201] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ab64a2f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:32,799 INFO [RS:2;jenkins-hbase4:37881] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:32,800 INFO [RS:2;jenkins-hbase4:37881] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:32,800 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:32,800 INFO [RS:2;jenkins-hbase4:37881] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 19:15:32,800 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:32,800 DEBUG [RS:2;jenkins-hbase4:37881] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00bf6c8b to 127.0.0.1:50949 2023-07-16 19:15:32,800 DEBUG [RS:2;jenkins-hbase4:37881] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,800 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37881,1689534906681; all regions closed. 2023-07-16 19:15:32,807 INFO [RS:3;jenkins-hbase4:35369] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:32,807 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:32,808 INFO [RS:1;jenkins-hbase4:42201] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:32,808 INFO [RS:3;jenkins-hbase4:35369] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:32,808 INFO [RS:3;jenkins-hbase4:35369] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:32,808 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:32,808 DEBUG [RS:3;jenkins-hbase4:35369] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x170d7385 to 127.0.0.1:50949 2023-07-16 19:15:32,808 DEBUG [RS:3;jenkins-hbase4:35369] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,808 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35369,1689534910605; all regions closed. 2023-07-16 19:15:32,809 INFO [RS:0;jenkins-hbase4:46561] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:32,809 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:32,809 INFO [RS:0;jenkins-hbase4:46561] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:32,809 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:32,809 INFO [RS:1;jenkins-hbase4:42201] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:32,809 INFO [RS:1;jenkins-hbase4:42201] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 19:15:32,809 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(3305): Received CLOSE for da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:32,810 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:32,810 DEBUG [RS:1;jenkins-hbase4:42201] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0ca7625d to 127.0.0.1:50949 2023-07-16 19:15:32,810 DEBUG [RS:1;jenkins-hbase4:42201] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,810 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 19:15:32,810 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1478): Online Regions={da6797ffc5b2850bae13a1d0baad6804=testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804.} 2023-07-16 19:15:32,811 DEBUG [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1504): Waiting on da6797ffc5b2850bae13a1d0baad6804 2023-07-16 19:15:32,809 INFO [RS:0;jenkins-hbase4:46561] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:32,811 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(3305): Received CLOSE for 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(3305): Received CLOSE for 2635ffdc96eb53d27ddc03fa25e81955 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(3305): Received CLOSE for 34e2a05c74ec47ec61d0b84dc3cec19b 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:32,812 DEBUG [RS:0;jenkins-hbase4:46561] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x235f4678 to 127.0.0.1:50949 2023-07-16 19:15:32,812 DEBUG [RS:0;jenkins-hbase4:46561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 19:15:32,812 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 19:15:32,823 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-16 19:15:32,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 39dcd59af26924a92354f901d71240d5, disabling compactions & flushes 2023-07-16 19:15:32,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:32,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:32,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 
after waiting 0 ms 2023-07-16 19:15:32,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:32,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da6797ffc5b2850bae13a1d0baad6804, disabling compactions & flushes 2023-07-16 19:15:32,824 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1478): Online Regions={39dcd59af26924a92354f901d71240d5=unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5., 2635ffdc96eb53d27ddc03fa25e81955=hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955., 1588230740=hbase:meta,,1.1588230740, 34e2a05c74ec47ec61d0b84dc3cec19b=hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b.} 2023-07-16 19:15:32,824 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:32,824 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:32,825 DEBUG [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1504): Waiting on 1588230740, 2635ffdc96eb53d27ddc03fa25e81955, 34e2a05c74ec47ec61d0b84dc3cec19b, 39dcd59af26924a92354f901d71240d5 2023-07-16 19:15:32,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. after waiting 0 ms 2023-07-16 19:15:32,825 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 
2023-07-16 19:15:32,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:32,826 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:32,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:32,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:32,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:32,826 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.22 KB heapSize=119.91 KB 2023-07-16 19:15:32,844 DEBUG [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs 2023-07-16 19:15:32,845 INFO [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37881%2C1689534906681.meta:.meta(num 1689534908962) 2023-07-16 19:15:32,845 DEBUG [RS:3;jenkins-hbase4:35369] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs 2023-07-16 19:15:32,845 INFO [RS:3;jenkins-hbase4:35369] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35369%2C1689534910605:(num 1689534911035) 2023-07-16 19:15:32,845 DEBUG [RS:3;jenkins-hbase4:35369] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,845 INFO [RS:3;jenkins-hbase4:35369] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:32,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/unmovedTable/39dcd59af26924a92354f901d71240d5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 19:15:32,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:32,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 39dcd59af26924a92354f901d71240d5: 2023-07-16 19:15:32,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689534927603.39dcd59af26924a92354f901d71240d5. 2023-07-16 19:15:32,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2635ffdc96eb53d27ddc03fa25e81955, disabling compactions & flushes 2023-07-16 19:15:32,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:32,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:32,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 
after waiting 0 ms 2023-07-16 19:15:32,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:32,852 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2635ffdc96eb53d27ddc03fa25e81955 1/1 column families, dataSize=27.07 KB heapSize=44.69 KB 2023-07-16 19:15:32,853 INFO [RS:3;jenkins-hbase4:35369] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:32,859 INFO [RS:3;jenkins-hbase4:35369] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:32,859 INFO [RS:3;jenkins-hbase4:35369] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:32,859 INFO [RS:3;jenkins-hbase4:35369] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 19:15:32,860 INFO [RS:3;jenkins-hbase4:35369] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35369 2023-07-16 19:15:32,861 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:32,861 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:32,862 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:32,862 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:32,868 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:32,881 DEBUG [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs 2023-07-16 19:15:32,881 INFO [RS:2;jenkins-hbase4:37881] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37881%2C1689534906681:(num 1689534908699) 2023-07-16 19:15:32,881 DEBUG [RS:2;jenkins-hbase4:37881] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:32,881 INFO [RS:2;jenkins-hbase4:37881] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:32,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/default/testRename/da6797ffc5b2850bae13a1d0baad6804/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 19:15:32,887 INFO [RS:2;jenkins-hbase4:37881] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:32,888 INFO [RS:2;jenkins-hbase4:37881] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:32,888 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:32,888 INFO [RS:2;jenkins-hbase4:37881] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:32,888 INFO [RS:2;jenkins-hbase4:37881] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
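Before each region server exits it closes its AsyncFSWAL and the remaining WAL files are archived to oldWALs, as the AbstractFSWAL lines above report. A WAL roll can also be requested from a client; a minimal sketch using Admin.rollWALWriter follows (class name illustrative, a reachable cluster assumed).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class RollAllWals {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Ask every live region server to roll its current WAL; once a rolled WAL
      // holds no un-flushed edits it becomes eligible to be moved to oldWALs.
      for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
        admin.rollWALWriter(sn);
        System.out.println("Requested WAL roll on " + sn);
      }
    }
  }
}
```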
2023-07-16 19:15:32,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:32,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da6797ffc5b2850bae13a1d0baad6804: 2023-07-16 19:15:32,890 INFO [RS:2;jenkins-hbase4:37881] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37881 2023-07-16 19:15:32,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689534925918.da6797ffc5b2850bae13a1d0baad6804. 2023-07-16 19:15:32,895 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.42 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/info/97ee23550d2d400baa04f56f0f135ec6 2023-07-16 19:15:32,901 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 97ee23550d2d400baa04f56f0f135ec6 2023-07-16 19:15:32,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.07 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/.tmp/m/307dd2f123e24585874c522289bfc34f 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:32,914 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:32,914 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:32,913 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35369,1689534910605 2023-07-16 19:15:32,914 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:32,914 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37881,1689534906681 2023-07-16 19:15:32,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 307dd2f123e24585874c522289bfc34f 2023-07-16 19:15:32,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/.tmp/m/307dd2f123e24585874c522289bfc34f as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m/307dd2f123e24585874c522289bfc34f 2023-07-16 19:15:32,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 307dd2f123e24585874c522289bfc34f 2023-07-16 19:15:32,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/m/307dd2f123e24585874c522289bfc34f, entries=28, sequenceid=101, filesize=6.1 K 2023-07-16 19:15:32,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.07 KB/27718, heapSize ~44.67 KB/45744, currentSize=0 B/0 for 2635ffdc96eb53d27ddc03fa25e81955 in 73ms, sequenceid=101, compaction requested=false 2023-07-16 19:15:32,941 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=210 (bloomFilter=false), 
to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/rep_barrier/ad213e7dbb8d4c2ca4070c748277feef 2023-07-16 19:15:32,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/rsgroup/2635ffdc96eb53d27ddc03fa25e81955/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-16 19:15:32,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:32,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:32,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2635ffdc96eb53d27ddc03fa25e81955: 2023-07-16 19:15:32,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689534909336.2635ffdc96eb53d27ddc03fa25e81955. 2023-07-16 19:15:32,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 34e2a05c74ec47ec61d0b84dc3cec19b, disabling compactions & flushes 2023-07-16 19:15:32,947 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:32,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:32,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. after waiting 0 ms 2023-07-16 19:15:32,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:32,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ad213e7dbb8d4c2ca4070c748277feef 2023-07-16 19:15:32,954 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/namespace/34e2a05c74ec47ec61d0b84dc3cec19b/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-16 19:15:32,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 2023-07-16 19:15:32,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 34e2a05c74ec47ec61d0b84dc3cec19b: 2023-07-16 19:15:32,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689534909231.34e2a05c74ec47ec61d0b84dc3cec19b. 
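The flushes logged above (hbase:rsgroup's single "m" family, then hbase:meta's three families) are triggered automatically as each region closes. The same memstore flush can be requested on demand through the Admin API; a minimal sketch is shown below, where "myTable" is a placeholder table name of my own, not one from this log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // "myTable" is a placeholder; substitute a table that actually exists.
    TableName table = TableName.valueOf("myTable");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Asks every region of the table to write its memstore out to a new store
      // file, the operation the log reports as "Flushing ... column families".
      admin.flush(table);
    }
  }
}
```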
2023-07-16 19:15:32,969 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/table/68604c1bec0641aa9f8482f065942829 2023-07-16 19:15:32,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 68604c1bec0641aa9f8482f065942829 2023-07-16 19:15:32,977 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/info/97ee23550d2d400baa04f56f0f135ec6 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info/97ee23550d2d400baa04f56f0f135ec6 2023-07-16 19:15:32,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 97ee23550d2d400baa04f56f0f135ec6 2023-07-16 19:15:32,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/info/97ee23550d2d400baa04f56f0f135ec6, entries=92, sequenceid=210, filesize=15.3 K 2023-07-16 19:15:32,984 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/rep_barrier/ad213e7dbb8d4c2ca4070c748277feef as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier/ad213e7dbb8d4c2ca4070c748277feef 2023-07-16 19:15:32,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ad213e7dbb8d4c2ca4070c748277feef 2023-07-16 19:15:32,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/rep_barrier/ad213e7dbb8d4c2ca4070c748277feef, entries=18, sequenceid=210, filesize=6.9 K 2023-07-16 19:15:32,992 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/.tmp/table/68604c1bec0641aa9f8482f065942829 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table/68604c1bec0641aa9f8482f065942829 2023-07-16 19:15:32,998 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 68604c1bec0641aa9f8482f065942829 2023-07-16 19:15:32,998 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/table/68604c1bec0641aa9f8482f065942829, entries=27, sequenceid=210, filesize=7.2 K 2023-07-16 19:15:32,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.22 KB/78053, heapSize 
~119.87 KB/122744, currentSize=0 B/0 for 1588230740 in 173ms, sequenceid=210, compaction requested=false 2023-07-16 19:15:33,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=19 2023-07-16 19:15:33,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:33,011 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42201,1689534906603; all regions closed. 2023-07-16 19:15:33,011 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:33,011 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 19:15:33,012 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:33,017 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37881,1689534906681] 2023-07-16 19:15:33,017 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37881,1689534906681; numProcessing=1 2023-07-16 19:15:33,018 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37881,1689534906681 already deleted, retry=false 2023-07-16 19:15:33,018 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37881,1689534906681 expired; onlineServers=3 2023-07-16 19:15:33,019 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35369,1689534910605] 2023-07-16 19:15:33,019 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35369,1689534910605; numProcessing=2 2023-07-16 19:15:33,019 DEBUG [RS:1;jenkins-hbase4:42201] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs 2023-07-16 19:15:33,019 INFO [RS:1;jenkins-hbase4:42201] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42201%2C1689534906603:(num 1689534908699) 2023-07-16 19:15:33,019 DEBUG [RS:1;jenkins-hbase4:42201] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:33,019 INFO [RS:1;jenkins-hbase4:42201] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:33,019 INFO [RS:1;jenkins-hbase4:42201] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:33,019 INFO [RS:1;jenkins-hbase4:42201] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:33,019 INFO [RS:1;jenkins-hbase4:42201] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:33,019 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
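The hbase:meta flush above committed new store files for the info, rep_barrier and table families at sequenceid=210. Clients can read the same catalog data directly; the sketch below (class name illustrative) scans the info family of hbase:meta and prints the region row keys.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanMeta {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      // hbase:meta keeps one row per region; the "info" family carries the region
      // state and assignment data that back the store files named in the log.
      Scan scan = new Scan().addFamily(Bytes.toBytes("info"));
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result row : scanner) {
          System.out.println(Bytes.toStringBinary(row.getRow()));
        }
      }
    }
  }
}
```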
2023-07-16 19:15:33,019 INFO [RS:1;jenkins-hbase4:42201] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 19:15:33,020 INFO [RS:1;jenkins-hbase4:42201] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42201 2023-07-16 19:15:33,021 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35369,1689534910605 already deleted, retry=false 2023-07-16 19:15:33,021 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35369,1689534910605 expired; onlineServers=2 2023-07-16 19:15:33,023 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:33,023 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:33,023 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42201,1689534906603 2023-07-16 19:15:33,024 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42201,1689534906603] 2023-07-16 19:15:33,024 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42201,1689534906603; numProcessing=3 2023-07-16 19:15:33,025 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46561,1689534906430; all regions closed. 
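Each region server registers an ephemeral znode under /hbase/rs, and the NodeDeleted / NodeChildrenChanged events above are what the master's RegionServerTracker reacts to when it logs "RegionServer ephemeral node deleted, processing expiration". A minimal stand-alone watcher on the same path using the plain ZooKeeper client would look roughly like the sketch below; the quorum address is copied from the log, the timeout and class name are arbitrary choices. Note that a child watch on /hbase/rs only yields NodeChildrenChanged; the per-server NodeDeleted events come from watches set on the individual child znodes, as HBase's own ZKWatcher does.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WatchRsZnodes {
  public static void main(String[] args) throws Exception {
    String quorum = "127.0.0.1:50949";  // from the log; adjust for a real cluster
    CountDownLatch done = new CountDownLatch(1);
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("ZK event: type=" + event.getType() + ", path=" + event.getPath());
    ZooKeeper zk = new ZooKeeper(quorum, 30000, watcher);
    try {
      // Registering a child watch on /hbase/rs delivers NodeChildrenChanged
      // whenever a region server's ephemeral znode is created or removed.
      List<String> servers = zk.getChildren("/hbase/rs", true);
      System.out.println("Currently registered region servers: " + servers);
      done.await(60, TimeUnit.SECONDS);  // linger so events can arrive
    } finally {
      zk.close();
    }
  }
}
```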
2023-07-16 19:15:33,025 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42201,1689534906603 already deleted, retry=false 2023-07-16 19:15:33,025 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42201,1689534906603 expired; onlineServers=1 2023-07-16 19:15:33,033 DEBUG [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs 2023-07-16 19:15:33,033 INFO [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46561%2C1689534906430.meta:.meta(num 1689534911849) 2023-07-16 19:15:33,039 DEBUG [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/oldWALs 2023-07-16 19:15:33,039 INFO [RS:0;jenkins-hbase4:46561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46561%2C1689534906430:(num 1689534908699) 2023-07-16 19:15:33,039 DEBUG [RS:0;jenkins-hbase4:46561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:33,039 INFO [RS:0;jenkins-hbase4:46561] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:33,040 INFO [RS:0;jenkins-hbase4:46561] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:33,040 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:33,041 INFO [RS:0;jenkins-hbase4:46561] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46561 2023-07-16 19:15:33,044 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46561,1689534906430 2023-07-16 19:15:33,044 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:33,045 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46561,1689534906430] 2023-07-16 19:15:33,045 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46561,1689534906430; numProcessing=4 2023-07-16 19:15:33,046 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46561,1689534906430 already deleted, retry=false 2023-07-16 19:15:33,047 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46561,1689534906430 expired; onlineServers=0 2023-07-16 19:15:33,047 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38143,1689534904450' ***** 2023-07-16 19:15:33,047 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 19:15:33,047 DEBUG [M:0;jenkins-hbase4:38143] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b3e3549, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:33,047 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:33,050 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:33,050 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:33,051 INFO [M:0;jenkins-hbase4:38143] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@27ce108a{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 19:15:33,051 INFO [M:0;jenkins-hbase4:38143] server.AbstractConnector(383): Stopped ServerConnector@93a89c0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:33,051 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:33,051 INFO [M:0;jenkins-hbase4:38143] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:33,052 INFO [M:0;jenkins-hbase4:38143] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3e41c305{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:33,052 INFO [M:0;jenkins-hbase4:38143] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@32d90619{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:33,052 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38143,1689534904450 2023-07-16 19:15:33,052 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38143,1689534904450; all regions closed. 2023-07-16 19:15:33,053 DEBUG [M:0;jenkins-hbase4:38143] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:33,053 INFO [M:0;jenkins-hbase4:38143] master.HMaster(1491): Stopping master jetty server 2023-07-16 19:15:33,053 INFO [M:0;jenkins-hbase4:38143] server.AbstractConnector(383): Stopped ServerConnector@5e228df9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:33,054 DEBUG [M:0;jenkins-hbase4:38143] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 19:15:33,054 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
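The "Cluster shutdown set; ... expired; onlineServers=N" lines above come from the master flipping the cluster-wide shutdown flag and then waiting for every region server to close its regions and expire. In this test the flag is set by the mini-cluster teardown, but the same request can be made from a client; a hedged sketch with Admin.shutdown() follows (class name illustrative).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ShutdownCluster {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Sets the cluster-wide shutdown flag; the active master then waits for
      // each region server to close its regions and expire, as logged above.
      admin.shutdown();
    }
  }
}
```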
2023-07-16 19:15:33,054 DEBUG [M:0;jenkins-hbase4:38143] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 19:15:33,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534908226] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534908226,5,FailOnTimeoutGroup] 2023-07-16 19:15:33,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534908225] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534908225,5,FailOnTimeoutGroup] 2023-07-16 19:15:33,054 INFO [M:0;jenkins-hbase4:38143] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 19:15:33,054 INFO [M:0;jenkins-hbase4:38143] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 19:15:33,054 INFO [M:0;jenkins-hbase4:38143] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 19:15:33,054 DEBUG [M:0;jenkins-hbase4:38143] master.HMaster(1512): Stopping service threads 2023-07-16 19:15:33,054 INFO [M:0;jenkins-hbase4:38143] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 19:15:33,054 ERROR [M:0;jenkins-hbase4:38143] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-16 19:15:33,055 INFO [M:0;jenkins-hbase4:38143] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 19:15:33,055 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 19:15:33,056 DEBUG [M:0;jenkins-hbase4:38143] zookeeper.ZKUtil(398): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 19:15:33,056 WARN [M:0;jenkins-hbase4:38143] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 19:15:33,056 INFO [M:0;jenkins-hbase4:38143] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 19:15:33,056 INFO [M:0;jenkins-hbase4:38143] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 19:15:33,056 DEBUG [M:0;jenkins-hbase4:38143] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 19:15:33,056 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:33,056 DEBUG [M:0;jenkins-hbase4:38143] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 19:15:33,056 DEBUG [M:0;jenkins-hbase4:38143] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 19:15:33,056 DEBUG [M:0;jenkins-hbase4:38143] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:33,056 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.22 KB heapSize=621.34 KB 2023-07-16 19:15:33,074 INFO [M:0;jenkins-hbase4:38143] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.22 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9594c50571f944b882497681be4a9893 2023-07-16 19:15:33,079 DEBUG [M:0;jenkins-hbase4:38143] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/9594c50571f944b882497681be4a9893 as hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9594c50571f944b882497681be4a9893 2023-07-16 19:15:33,084 INFO [M:0;jenkins-hbase4:38143] regionserver.HStore(1080): Added hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/9594c50571f944b882497681be4a9893, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-16 19:15:33,085 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegion(2948): Finished flush of dataSize ~519.22 KB/531681, heapSize ~621.33 KB/636240, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=1152, compaction requested=false 2023-07-16 19:15:33,087 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:33,087 DEBUG [M:0;jenkins-hbase4:38143] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:33,092 INFO [M:0;jenkins-hbase4:38143] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 19:15:33,092 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:33,093 INFO [M:0;jenkins-hbase4:38143] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38143 2023-07-16 19:15:33,095 DEBUG [M:0;jenkins-hbase4:38143] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38143,1689534904450 already deleted, retry=false 2023-07-16 19:15:33,162 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,162 INFO [RS:0;jenkins-hbase4:46561] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46561,1689534906430; zookeeper connection closed. 
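The master's local "master:store" region flushed its single "proc" family above; that family backs the RegionProcedureStore that holds master procedure state. The store itself is internal, but a client can at least list the procedures the master currently tracks. The sketch below assumes HBase 2.x's Admin.getProcedures(), which I understand to return procedure details as a string; treat the exact return format as an assumption.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListProcedures {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Describes the procedures currently known to the master's procedure
      // store (string return format assumed; see the note above).
      String procedures = admin.getProcedures();
      System.out.println(procedures);
    }
  }
}
```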
2023-07-16 19:15:33,162 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:46561-0x1016f8f37ae0001, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,162 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@732bd74d] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@732bd74d 2023-07-16 19:15:33,262 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,262 INFO [RS:1;jenkins-hbase4:42201] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42201,1689534906603; zookeeper connection closed. 2023-07-16 19:15:33,262 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:42201-0x1016f8f37ae0002, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,262 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@33fc3fda] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@33fc3fda 2023-07-16 19:15:33,362 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,362 INFO [RS:3;jenkins-hbase4:35369] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35369,1689534910605; zookeeper connection closed. 2023-07-16 19:15:33,362 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:35369-0x1016f8f37ae000b, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,363 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@18c42497] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@18c42497 2023-07-16 19:15:33,462 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,462 INFO [RS:2;jenkins-hbase4:37881] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37881,1689534906681; zookeeper connection closed. 
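The lines that follow report the mini cluster finishing shutdown ("Shutdown of 1 master(s) and 4 regionserver(s) complete", "Minicluster is down") and HBaseTestingUtility immediately starting a fresh one with the same StartMiniClusterOption. A minimal sketch of that stop/start cycle with the public test utility is shown below; the class name and the placeholder test body are illustrative, not taken from TestRSGroupsAdmin1.

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterRestart {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Same counts as the StartMiniClusterOption printed in the log:
    // 1 master, 3 region servers, 3 data nodes.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build();
    // Starts an in-process HDFS, ZooKeeper and HBase cluster.
    util.startMiniCluster(option);
    try {
      // ... run test logic against util.getConnection() ...
    } finally {
      // Tears everything down again; "Minicluster is down" is logged at the
      // end of this call, after which a new cluster can be started.
      util.shutdownMiniCluster();
    }
  }
}
```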
2023-07-16 19:15:33,463 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): regionserver:37881-0x1016f8f37ae0003, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,466 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7a528b86] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7a528b86 2023-07-16 19:15:33,466 INFO [Listener at localhost/36799] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 19:15:33,663 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,663 INFO [M:0;jenkins-hbase4:38143] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38143,1689534904450; zookeeper connection closed. 2023-07-16 19:15:33,663 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): master:38143-0x1016f8f37ae0000, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:33,666 WARN [Listener at localhost/36799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 19:15:33,671 INFO [Listener at localhost/36799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:33,774 WARN [BP-1873196108-172.31.14.131-1689534900365 heartbeating to localhost/127.0.0.1:34211] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 19:15:33,774 WARN [BP-1873196108-172.31.14.131-1689534900365 heartbeating to localhost/127.0.0.1:34211] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1873196108-172.31.14.131-1689534900365 (Datanode Uuid e3d6455e-262f-413c-9782-13699c0782a3) service to localhost/127.0.0.1:34211 2023-07-16 19:15:33,776 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data5/current/BP-1873196108-172.31.14.131-1689534900365] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:33,777 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data6/current/BP-1873196108-172.31.14.131-1689534900365] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:33,787 WARN [Listener at localhost/36799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 19:15:33,790 INFO [Listener at localhost/36799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:33,892 WARN [BP-1873196108-172.31.14.131-1689534900365 heartbeating to localhost/127.0.0.1:34211] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 19:15:33,892 WARN [BP-1873196108-172.31.14.131-1689534900365 heartbeating to localhost/127.0.0.1:34211] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1873196108-172.31.14.131-1689534900365 (Datanode Uuid a4ca084f-b382-48e1-bfcf-52c89ddc82dc) service to localhost/127.0.0.1:34211 2023-07-16 19:15:33,893 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data3/current/BP-1873196108-172.31.14.131-1689534900365] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:33,893 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data4/current/BP-1873196108-172.31.14.131-1689534900365] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:33,895 WARN [Listener at localhost/36799] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 19:15:33,898 INFO [Listener at localhost/36799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:34,001 WARN [BP-1873196108-172.31.14.131-1689534900365 heartbeating to localhost/127.0.0.1:34211] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 19:15:34,001 WARN [BP-1873196108-172.31.14.131-1689534900365 heartbeating to localhost/127.0.0.1:34211] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1873196108-172.31.14.131-1689534900365 (Datanode Uuid a9d0d1bb-5569-4212-81f2-2aa371553760) service to localhost/127.0.0.1:34211 2023-07-16 19:15:34,002 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data1/current/BP-1873196108-172.31.14.131-1689534900365] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:34,002 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/cluster_3dc61365-b8a9-88a9-a1c3-d2e9b69c46bf/dfs/data/data2/current/BP-1873196108-172.31.14.131-1689534900365] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:34,029 INFO [Listener at localhost/36799] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:34,148 INFO [Listener at localhost/36799] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.log.dir so I do NOT create it in target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/7110e03b-344f-cd4e-9771-f21c7d581436/hadoop.tmp.dir so I do NOT create it in target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3, deleteOnExit=true 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/test.cache.data in system properties and HBase conf 2023-07-16 19:15:34,195 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir in system properties and HBase conf 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 19:15:34,196 DEBUG [Listener at localhost/36799] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 19:15:34,196 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/nfs.dump.dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/java.io.tmpdir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 19:15:34,197 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 19:15:34,198 INFO [Listener at localhost/36799] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 19:15:34,203 WARN [Listener at localhost/36799] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 19:15:34,203 WARN [Listener at localhost/36799] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 19:15:34,245 DEBUG [Listener at localhost/36799-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016f8f37ae000a, quorum=127.0.0.1:50949, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 19:15:34,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016f8f37ae000a, quorum=127.0.0.1:50949, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 19:15:34,247 WARN [Listener at localhost/36799] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:34,249 INFO [Listener at localhost/36799] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:34,254 INFO [Listener at localhost/36799] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/java.io.tmpdir/Jetty_localhost_32835_hdfs____bvhi11/webapp 2023-07-16 19:15:34,348 INFO [Listener at localhost/36799] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:32835 2023-07-16 19:15:34,353 WARN [Listener at localhost/36799] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 19:15:34,353 WARN [Listener at localhost/36799] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 19:15:34,407 WARN [Listener at localhost/43643] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:34,431 WARN [Listener at localhost/43643] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:34,433 WARN [Listener 
at localhost/43643] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:34,436 INFO [Listener at localhost/43643] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:34,442 INFO [Listener at localhost/43643] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/java.io.tmpdir/Jetty_localhost_36355_datanode____juxdbr/webapp 2023-07-16 19:15:34,539 INFO [Listener at localhost/43643] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36355 2023-07-16 19:15:34,546 WARN [Listener at localhost/35379] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:34,568 WARN [Listener at localhost/35379] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-16 19:15:34,654 WARN [Listener at localhost/35379] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:34,656 WARN [Listener at localhost/35379] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:34,657 INFO [Listener at localhost/35379] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:34,661 INFO [Listener at localhost/35379] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/java.io.tmpdir/Jetty_localhost_42113_datanode____.1qdtqc/webapp 2023-07-16 19:15:34,716 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe577de841d8145bc: Processing first storage report for DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284 from datanode 139ac45a-2a52-49aa-9bba-13f32dde85b5 2023-07-16 19:15:34,717 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe577de841d8145bc: from storage DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284 node DatanodeRegistration(127.0.0.1:34819, datanodeUuid=139ac45a-2a52-49aa-9bba-13f32dde85b5, infoPort=46057, infoSecurePort=0, ipcPort=35379, storageInfo=lv=-57;cid=testClusterID;nsid=1737885871;c=1689534934206), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 19:15:34,717 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe577de841d8145bc: Processing first storage report for DS-b7273765-ebe7-48a7-ac18-c94f12770214 from datanode 139ac45a-2a52-49aa-9bba-13f32dde85b5 2023-07-16 19:15:34,717 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe577de841d8145bc: from storage DS-b7273765-ebe7-48a7-ac18-c94f12770214 node DatanodeRegistration(127.0.0.1:34819, datanodeUuid=139ac45a-2a52-49aa-9bba-13f32dde85b5, infoPort=46057, infoSecurePort=0, ipcPort=35379, storageInfo=lv=-57;cid=testClusterID;nsid=1737885871;c=1689534934206), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:34,807 INFO [Listener at localhost/35379] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42113 2023-07-16 19:15:34,827 WARN [Listener at localhost/45053] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:34,852 WARN [Listener at localhost/45053] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:34,857 WARN [Listener at localhost/45053] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:34,859 INFO [Listener at localhost/45053] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:34,866 INFO [Listener at localhost/45053] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/java.io.tmpdir/Jetty_localhost_41751_datanode____.9hx86b/webapp 2023-07-16 19:15:34,936 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:34,936 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 19:15:34,936 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 19:15:34,943 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd23115bf3b615e9e: Processing first storage report for DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f from datanode acd51154-c4c7-46ec-ba4c-cfd639a6611e 2023-07-16 19:15:34,943 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd23115bf3b615e9e: from storage DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f node DatanodeRegistration(127.0.0.1:39811, datanodeUuid=acd51154-c4c7-46ec-ba4c-cfd639a6611e, infoPort=45771, infoSecurePort=0, ipcPort=45053, storageInfo=lv=-57;cid=testClusterID;nsid=1737885871;c=1689534934206), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:34,943 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd23115bf3b615e9e: Processing first storage report for DS-9a2a643d-0cd6-474a-b426-8c70ddaf9e56 from datanode acd51154-c4c7-46ec-ba4c-cfd639a6611e 2023-07-16 19:15:34,943 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd23115bf3b615e9e: from storage DS-9a2a643d-0cd6-474a-b426-8c70ddaf9e56 node DatanodeRegistration(127.0.0.1:39811, datanodeUuid=acd51154-c4c7-46ec-ba4c-cfd639a6611e, infoPort=45771, infoSecurePort=0, ipcPort=45053, storageInfo=lv=-57;cid=testClusterID;nsid=1737885871;c=1689534934206), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:34,977 INFO [Listener at localhost/45053] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41751 2023-07-16 19:15:34,989 WARN [Listener at localhost/36007] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:35,091 
INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe15e375c8757aa3e: Processing first storage report for DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26 from datanode 675ece7b-01be-4254-aab4-0f9c149e35a0 2023-07-16 19:15:35,092 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe15e375c8757aa3e: from storage DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26 node DatanodeRegistration(127.0.0.1:35989, datanodeUuid=675ece7b-01be-4254-aab4-0f9c149e35a0, infoPort=43325, infoSecurePort=0, ipcPort=36007, storageInfo=lv=-57;cid=testClusterID;nsid=1737885871;c=1689534934206), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 19:15:35,092 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe15e375c8757aa3e: Processing first storage report for DS-dd57ff69-1297-4943-b2d2-0cf595f8b3dd from datanode 675ece7b-01be-4254-aab4-0f9c149e35a0 2023-07-16 19:15:35,092 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe15e375c8757aa3e: from storage DS-dd57ff69-1297-4943-b2d2-0cf595f8b3dd node DatanodeRegistration(127.0.0.1:35989, datanodeUuid=675ece7b-01be-4254-aab4-0f9c149e35a0, infoPort=43325, infoSecurePort=0, ipcPort=36007, storageInfo=lv=-57;cid=testClusterID;nsid=1737885871;c=1689534934206), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:35,102 DEBUG [Listener at localhost/36007] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef 2023-07-16 19:15:35,107 INFO [Listener at localhost/36007] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/zookeeper_0, clientPort=56571, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 19:15:35,109 INFO [Listener at localhost/36007] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56571 2023-07-16 19:15:35,109 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,110 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,131 INFO [Listener at localhost/36007] util.FSUtils(471): Created version file at hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d with version=8 2023-07-16 19:15:35,132 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set 
to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/hbase-staging 2023-07-16 19:15:35,133 DEBUG [Listener at localhost/36007] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 19:15:35,133 DEBUG [Listener at localhost/36007] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 19:15:35,133 DEBUG [Listener at localhost/36007] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 19:15:35,133 DEBUG [Listener at localhost/36007] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 19:15:35,134 INFO [Listener at localhost/36007] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:35,135 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,135 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,135 INFO [Listener at localhost/36007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:35,135 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,135 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:35,135 INFO [Listener at localhost/36007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:35,136 INFO [Listener at localhost/36007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41265 2023-07-16 19:15:35,136 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,137 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,138 INFO [Listener at localhost/36007] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41265 connecting to ZooKeeper ensemble=127.0.0.1:56571 2023-07-16 19:15:35,146 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:412650x0, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:35,147 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41265-0x1016f8fb3430000 connected 2023-07-16 19:15:35,161 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:35,161 DEBUG [Listener at 
localhost/36007] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:35,161 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:35,162 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41265 2023-07-16 19:15:35,162 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41265 2023-07-16 19:15:35,162 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41265 2023-07-16 19:15:35,163 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41265 2023-07-16 19:15:35,163 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41265 2023-07-16 19:15:35,165 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:35,165 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:35,165 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:35,166 INFO [Listener at localhost/36007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 19:15:35,166 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:35,166 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:35,166 INFO [Listener at localhost/36007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
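The "Setting Master Port to random" / "Setting RS InfoServer Port to random" messages in this stretch come from the harness asking for ephemeral ports so concurrent builds do not collide. A minimal sketch of the same idea against the public Configuration API; which of these keys the harness actually rewrites is an assumption here, and port 0 simply means "any free port":

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RandomPortsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Port 0 lets the OS pick any free port, which is what
        // "Setting ... Port to random" amounts to. The exact keys the
        // harness touches are not shown in the log; these are the usual ones.
        conf.setInt("hbase.master.port", 0);
        conf.setInt("hbase.master.info.port", 0);
        conf.setInt("hbase.regionserver.port", 0);
        conf.setInt("hbase.regionserver.info.port", 0);
        System.out.println("hbase.master.info.port = " + conf.get("hbase.master.info.port"));
      }
    }
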
2023-07-16 19:15:35,167 INFO [Listener at localhost/36007] http.HttpServer(1146): Jetty bound to port 43571 2023-07-16 19:15:35,167 INFO [Listener at localhost/36007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:35,170 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,170 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@56f748b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:35,170 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,171 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4879aa59{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:35,181 INFO [Listener at localhost/36007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:35,183 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:35,183 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:35,183 INFO [Listener at localhost/36007] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:35,184 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,186 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4920ba9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 19:15:35,187 INFO [Listener at localhost/36007] server.AbstractConnector(333): Started ServerConnector@414299ab{HTTP/1.1, (http/1.1)}{0.0.0.0:43571} 2023-07-16 19:15:35,187 INFO [Listener at localhost/36007] server.Server(415): Started @36903ms 2023-07-16 19:15:35,188 INFO [Listener at localhost/36007] master.HMaster(444): hbase.rootdir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d, hbase.cluster.distributed=false 2023-07-16 19:15:35,205 INFO [Listener at localhost/36007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:35,206 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,206 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,206 INFO [Listener at localhost/36007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 
19:15:35,206 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,206 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:35,206 INFO [Listener at localhost/36007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:35,207 INFO [Listener at localhost/36007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38201 2023-07-16 19:15:35,207 INFO [Listener at localhost/36007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:35,208 DEBUG [Listener at localhost/36007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:35,208 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,210 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,210 INFO [Listener at localhost/36007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38201 connecting to ZooKeeper ensemble=127.0.0.1:56571 2023-07-16 19:15:35,217 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:382010x0, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:35,219 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38201-0x1016f8fb3430001 connected 2023-07-16 19:15:35,219 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:35,220 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:35,220 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:35,221 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38201 2023-07-16 19:15:35,222 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38201 2023-07-16 19:15:35,222 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38201 2023-07-16 19:15:35,230 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38201 2023-07-16 19:15:35,231 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38201 2023-07-16 19:15:35,233 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:35,233 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:35,233 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:35,234 INFO [Listener at localhost/36007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:35,234 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:35,234 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:35,234 INFO [Listener at localhost/36007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:35,236 INFO [Listener at localhost/36007] http.HttpServer(1146): Jetty bound to port 37581 2023-07-16 19:15:35,236 INFO [Listener at localhost/36007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:35,242 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,243 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@160ee4bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:35,243 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,243 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@79cb8f96{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:35,251 INFO [Listener at localhost/36007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:35,252 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:35,252 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:35,252 INFO [Listener at localhost/36007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:35,255 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,256 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2d6aa7c8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:35,260 INFO [Listener at localhost/36007] server.AbstractConnector(333): Started ServerConnector@64edd7cb{HTTP/1.1, (http/1.1)}{0.0.0.0:37581} 2023-07-16 19:15:35,260 INFO [Listener at localhost/36007] server.Server(415): Started @36976ms 2023-07-16 19:15:35,274 INFO [Listener at localhost/36007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:35,275 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,275 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,275 INFO [Listener at localhost/36007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:35,275 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,275 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:35,275 INFO [Listener at localhost/36007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:35,276 INFO [Listener at localhost/36007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41819 2023-07-16 19:15:35,276 INFO [Listener at localhost/36007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:35,278 DEBUG [Listener at localhost/36007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:35,279 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,281 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,282 INFO [Listener at localhost/36007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41819 connecting to ZooKeeper ensemble=127.0.0.1:56571 2023-07-16 19:15:35,289 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:418190x0, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:35,294 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:418190x0, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:35,295 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41819-0x1016f8fb3430002 connected 2023-07-16 19:15:35,296 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:35,297 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:35,307 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41819 2023-07-16 19:15:35,307 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41819 2023-07-16 19:15:35,307 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41819 2023-07-16 19:15:35,310 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41819 2023-07-16 19:15:35,310 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41819 2023-07-16 19:15:35,313 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:35,313 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:35,313 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:35,313 INFO [Listener at localhost/36007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:35,314 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:35,314 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:35,314 INFO [Listener at localhost/36007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
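Three region servers (bound to 38201, 41819 and 34171 in this run) are brought up one after another by the same listener thread. A hedged sketch of how a test typically drives this through the 2.x test utility and then reaches the individual servers; the method names are the public HBaseTestingUtility/MiniHBaseCluster API, but the wrapper class itself is hypothetical:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // One master and three region servers, as in this run.
        util.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1).numRegionServers(3).numDataNodes(3).build());

        MiniHBaseCluster cluster = util.getMiniHBaseCluster();
        for (int i = 0; i < 3; i++) {
          // Each server registers itself under a ServerName of host,port,startcode.
          ServerName sn = cluster.getRegionServer(i).getServerName();
          System.out.println("region server " + i + ": " + sn);
        }
        util.shutdownMiniCluster();
      }
    }
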
2023-07-16 19:15:35,314 INFO [Listener at localhost/36007] http.HttpServer(1146): Jetty bound to port 42177 2023-07-16 19:15:35,315 INFO [Listener at localhost/36007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:35,320 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,320 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@79be72f9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:35,321 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,321 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@51764913{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:35,328 INFO [Listener at localhost/36007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:35,329 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:35,329 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:35,329 INFO [Listener at localhost/36007] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:35,331 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,331 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7e287aab{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:35,333 INFO [Listener at localhost/36007] server.AbstractConnector(333): Started ServerConnector@535d47bf{HTTP/1.1, (http/1.1)}{0.0.0.0:42177} 2023-07-16 19:15:35,333 INFO [Listener at localhost/36007] server.Server(415): Started @37049ms 2023-07-16 19:15:35,345 INFO [Listener at localhost/36007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:35,346 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,346 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,346 INFO [Listener at localhost/36007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:35,346 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-16 19:15:35,346 INFO [Listener at localhost/36007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:35,346 INFO [Listener at localhost/36007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:35,347 INFO [Listener at localhost/36007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34171 2023-07-16 19:15:35,347 INFO [Listener at localhost/36007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:35,349 DEBUG [Listener at localhost/36007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:35,349 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,350 INFO [Listener at localhost/36007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,351 INFO [Listener at localhost/36007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34171 connecting to ZooKeeper ensemble=127.0.0.1:56571 2023-07-16 19:15:35,354 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:341710x0, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:35,355 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:341710x0, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:35,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34171-0x1016f8fb3430003 connected 2023-07-16 19:15:35,356 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:35,357 DEBUG [Listener at localhost/36007] zookeeper.ZKUtil(164): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:35,357 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34171 2023-07-16 19:15:35,357 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34171 2023-07-16 19:15:35,358 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34171 2023-07-16 19:15:35,359 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34171 2023-07-16 19:15:35,359 DEBUG [Listener at localhost/36007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34171 2023-07-16 19:15:35,361 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:35,361 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:35,361 INFO [Listener at localhost/36007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:35,362 INFO [Listener at localhost/36007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:35,362 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:35,362 INFO [Listener at localhost/36007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:35,362 INFO [Listener at localhost/36007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:35,363 INFO [Listener at localhost/36007] http.HttpServer(1146): Jetty bound to port 38227 2023-07-16 19:15:35,363 INFO [Listener at localhost/36007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:35,364 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,364 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@37a3ac68{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:35,365 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,365 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2689a462{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:35,369 INFO [Listener at localhost/36007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:35,370 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:35,370 INFO [Listener at localhost/36007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:35,370 INFO [Listener at localhost/36007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:35,371 INFO [Listener at localhost/36007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:35,371 INFO [Listener at localhost/36007] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2492ed2c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:35,373 INFO [Listener at localhost/36007] server.AbstractConnector(333): Started ServerConnector@7947507{HTTP/1.1, (http/1.1)}{0.0.0.0:38227} 2023-07-16 19:15:35,373 INFO [Listener at localhost/36007] server.Server(415): Started @37089ms 2023-07-16 19:15:35,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:35,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@614b4eae{HTTP/1.1, (http/1.1)}{0.0.0.0:36363} 2023-07-16 19:15:35,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37100ms 2023-07-16 19:15:35,385 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,387 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 19:15:35,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,389 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:35,389 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:35,389 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:35,390 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:35,389 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:35,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:35,399 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:35,399 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41265,1689534935134 from backup master directory 2023-07-16 19:15:35,400 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,400 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 19:15:35,400 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:35,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,423 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/hbase.id with ID: ec4bf4c3-5694-46ad-804c-ee358817e070 2023-07-16 19:15:35,436 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:35,439 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:35,449 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x2d4ace27 to 127.0.0.1:56571 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:35,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@659cf391, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:35,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:35,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 19:15:35,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:35,457 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => 
''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store-tmp 2023-07-16 19:15:35,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:35,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 19:15:35,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:35,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:35,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 19:15:35,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:35,467 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
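The descriptor printed above for the master's local 'master:store' region ({NAME => 'proc', BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', ...}) belongs to an internal region the master manages itself, not something a test creates, but the same shape can be expressed with the public builder API. A sketch with the values read off the log line; the class name is hypothetical:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
      public static void main(String[] args) {
        // Mirrors the 'proc' family from the logged descriptor.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)                 // VERSIONS => '1'
            .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
            .setBlocksize(65536)               // BLOCKSIZE => '65536'
            .setBlockCacheEnabled(true)        // BLOCKCACHE => 'true'
            .build();
        TableDescriptor store = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(proc)
            .build();
        System.out.println(store);
      }
    }
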
2023-07-16 19:15:35,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:35,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/WALs/jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41265%2C1689534935134, suffix=, logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/WALs/jenkins-hbase4.apache.org,41265,1689534935134, archiveDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/oldWALs, maxLogs=10 2023-07-16 19:15:35,493 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK] 2023-07-16 19:15:35,499 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK] 2023-07-16 19:15:35,499 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK] 2023-07-16 19:15:35,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/WALs/jenkins-hbase4.apache.org,41265,1689534935134/jenkins-hbase4.apache.org%2C41265%2C1689534935134.1689534935471 2023-07-16 19:15:35,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK], DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK], DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK]] 2023-07-16 19:15:35,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:35,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:35,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:35,516 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:35,520 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:35,522 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 19:15:35,523 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 19:15:35,523 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:35,524 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:35,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:35,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:35,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:35,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10825219840, jitterRate=0.00817716121673584}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:35,531 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:35,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 19:15:35,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 19:15:35,533 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 19:15:35,533 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 19:15:35,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 19:15:35,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 19:15:35,534 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 19:15:35,535 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 19:15:35,536 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 19:15:35,537 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 19:15:35,537 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 19:15:35,538 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 19:15:35,539 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:35,540 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 19:15:35,541 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 19:15:35,542 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 19:15:35,543 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:35,543 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:35,544 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-16 19:15:35,543 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:35,547 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:35,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41265,1689534935134, sessionid=0x1016f8fb3430000, setting cluster-up flag (Was=false) 2023-07-16 19:15:35,551 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:35,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 19:15:35,556 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,560 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:35,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 19:15:35,566 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:35,567 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.hbase-snapshot/.tmp 2023-07-16 19:15:35,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 19:15:35,572 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 19:15:35,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 19:15:35,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-16 19:15:35,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
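The coprocessor lines above show the master loading org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint and registering the RSGroupAdminService. For context, here is a rough sketch of how a test or deployment typically wires that endpoint and the group-aware balancer into the configuration before starting a mini cluster; the class name RsGroupMiniClusterSketch and the cluster sizing are illustrative, and this is an assumption about usage, not the code of this test run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class RsGroupMiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    Configuration conf = util.getConfiguration();
    // Load the rsgroup admin endpoint as a system master coprocessor.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Use the group-aware balancer so region assignment respects rsgroups.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    util.startMiniCluster(3); // three region servers, as in the run logged here
    try {
      // ... exercise rsgroup admin operations against util.getConnection() ...
    } finally {
      util.shutdownMiniCluster();
    }
  }
}

Both keys are normally set together: without the balancer override, group membership would not be honored during assignment even though the admin endpoint is loaded.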
2023-07-16 19:15:35,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:35,577 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(951): ClusterId : ec4bf4c3-5694-46ad-804c-ee358817e070 2023-07-16 19:15:35,577 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:35,579 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:35,579 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:35,579 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(951): ClusterId : ec4bf4c3-5694-46ad-804c-ee358817e070 2023-07-16 19:15:35,579 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:35,581 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:35,582 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:35,583 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:35,583 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:35,584 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(951): ClusterId : ec4bf4c3-5694-46ad-804c-ee358817e070 2023-07-16 19:15:35,585 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:35,586 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:35,587 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:35,587 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:35,589 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:35,594 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ReadOnlyZKClient(139): Connect 0x50face39 to 127.0.0.1:56571 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:35,594 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ReadOnlyZKClient(139): Connect 0x2f7102fa to 127.0.0.1:56571 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:35,594 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ReadOnlyZKClient(139): Connect 0x4b732aae to 127.0.0.1:56571 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:35,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 19:15:35,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, 
stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 19:15:35,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 19:15:35,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:35,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,625 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689534965625 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 19:15:35,626 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 19:15:35,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 19:15:35,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 19:15:35,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 19:15:35,635 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 19:15:35,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534935635,5,FailOnTimeoutGroup] 2023-07-16 19:15:35,636 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534935636,5,FailOnTimeoutGroup] 2023-07-16 19:15:35,636 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:35,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,636 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 19:15:35,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 19:15:35,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,637 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
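The CleanerChore lines above list the delegates installed in the master's log-cleaner and HFile-cleaner chains. As a hedged illustration of where that list comes from, the sketch below sets the two plugin chains and the chore interval explicitly; the key names follow the standard hbase.master.* cleaner settings and should be checked against the running version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChainSketch {
  public static Configuration configureCleaners() {
    Configuration conf = HBaseConfiguration.create();
    // Comma-separated chains; each entry must implement the matching cleaner delegate interface.
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
        + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,"
        + "org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner");
    // Cleaner chore period in ms; 600000 matches the LogsCleaner/HFileCleaner chores logged above.
    conf.setInt("hbase.master.cleaner.interval", 600000);
    return conf;
  }
}

The classes named here are the same ones the chore reports initializing above; custom cleaners are appended to the same comma-separated lists.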
2023-07-16 19:15:35,638 DEBUG [RS:2;jenkins-hbase4:34171] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5e3d2a56, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:35,638 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:35,638 DEBUG [RS:2;jenkins-hbase4:34171] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a02a549, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:35,647 DEBUG [RS:0;jenkins-hbase4:38201] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@418b41de, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:35,647 DEBUG [RS:1;jenkins-hbase4:41819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2589698a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:35,647 DEBUG [RS:0;jenkins-hbase4:38201] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@489c78b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:35,647 DEBUG [RS:1;jenkins-hbase4:41819] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@608bf35f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:35,657 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34171 2023-07-16 19:15:35,657 INFO [RS:2;jenkins-hbase4:34171] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:35,657 INFO [RS:2;jenkins-hbase4:34171] 
regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:35,657 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:35,658 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41265,1689534935134 with isa=jenkins-hbase4.apache.org/172.31.14.131:34171, startcode=1689534935345 2023-07-16 19:15:35,658 DEBUG [RS:2;jenkins-hbase4:34171] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:35,660 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38201 2023-07-16 19:15:35,660 INFO [RS:0;jenkins-hbase4:38201] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:35,660 INFO [RS:0;jenkins-hbase4:38201] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:35,660 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:35,660 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55917, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:35,660 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:41819 2023-07-16 19:15:35,660 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41265,1689534935134 with isa=jenkins-hbase4.apache.org/172.31.14.131:38201, startcode=1689534935205 2023-07-16 19:15:35,662 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41265] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,662 DEBUG [RS:0;jenkins-hbase4:38201] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:35,660 INFO [RS:1;jenkins-hbase4:41819] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:35,662 INFO [RS:1;jenkins-hbase4:41819] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:35,662 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:35,662 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1022): About to register with Master. 
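A few lines earlier, FSTableDescriptors writes the hbase:meta descriptor with its 'info', 'rep_barrier' and 'table' families. Purely as an illustration of how the same per-family settings look in client code, the sketch below builds a user table descriptor with info-like settings (BLOOMFILTER NONE, IN_MEMORY, VERSIONS 3, BLOCKSIZE 8192); the table name "example" is made up, and this is not how meta itself is created.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  public static TableDescriptor metaLikeTable() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .setInMemory(true)                  // IN_MEMORY => 'true'
            .setMaxVersions(3)                  // VERSIONS => '3'
            .setBlocksize(8192)                 // BLOCKSIZE => '8192'
            .build())
        .build();
  }
}

Given an open Admin handle, admin.createTable(DescriptorSketch.metaLikeTable()) would create such a table.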
2023-07-16 19:15:35,663 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 19:15:35,663 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d 2023-07-16 19:15:35,663 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43643 2023-07-16 19:15:35,663 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43571 2023-07-16 19:15:35,664 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41265,1689534935134 with isa=jenkins-hbase4.apache.org/172.31.14.131:41819, startcode=1689534935274 2023-07-16 19:15:35,664 DEBUG [RS:1;jenkins-hbase4:41819] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:35,665 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:35,665 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47937, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:35,665 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41265] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,665 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41193, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:35,666 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:35,665 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ZKUtil(162): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,666 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 19:15:35,666 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41265] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,666 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d 2023-07-16 19:15:35,666 WARN [RS:2;jenkins-hbase4:34171] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 19:15:35,666 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43643 2023-07-16 19:15:35,666 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:35,666 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43571 2023-07-16 19:15:35,666 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 19:15:35,666 INFO [RS:2;jenkins-hbase4:34171] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:35,666 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d 2023-07-16 19:15:35,666 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,666 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43643 2023-07-16 19:15:35,667 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43571 2023-07-16 19:15:35,672 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38201,1689534935205] 2023-07-16 19:15:35,672 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34171,1689534935345] 2023-07-16 19:15:35,672 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:35,673 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ZKUtil(162): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,673 WARN [RS:1;jenkins-hbase4:41819] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
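Each region server above instantiates a WALProvider of type AsyncFSWALProvider. That choice is configuration-driven; a minimal sketch, assuming the stock hbase.wal.provider key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  public static Configuration asyncWalConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs");      // AsyncFSWALProvider, as logged above
    // conf.set("hbase.wal.provider", "filesystem"); // classic FSHLog-based provider instead
    return conf;
  }
}

On 2.x branches the async provider is the usual default, which is consistent with what WALFactory reports here.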
2023-07-16 19:15:35,673 INFO [RS:1;jenkins-hbase4:41819] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:35,673 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,677 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41819,1689534935274] 2023-07-16 19:15:35,677 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ZKUtil(162): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,677 WARN [RS:0;jenkins-hbase4:38201] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:35,677 INFO [RS:0;jenkins-hbase4:38201] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:35,677 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,678 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ZKUtil(162): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,679 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ZKUtil(162): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,679 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ZKUtil(162): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,680 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:35,681 INFO [RS:2;jenkins-hbase4:34171] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:35,685 INFO [RS:2;jenkins-hbase4:34171] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:35,687 INFO [RS:2;jenkins-hbase4:34171] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:35,687 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,694 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:35,698 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
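The PressureAwareCompactionThroughputController above reports a 100 MB/s upper and 50 MB/s lower bound on compaction I/O. A sketch of the corresponding tuning follows, with the caveat that the key names are my reading of that controller's configuration and may differ between versions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputSketch {
  public static Configuration boundedCompactionIo() {
    Configuration conf = HBaseConfiguration.create();
    // Bytes per second; 100 MB/s and 50 MB/s match the bounds reported in the log above.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}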
2023-07-16 19:15:35,698 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,698 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,699 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:35,699 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,700 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ZKUtil(162): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,700 DEBUG [RS:2;jenkins-hbase4:34171] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,700 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ZKUtil(162): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,700 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ZKUtil(162): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,700 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:35,700 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ZKUtil(162): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,700 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 
'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d 2023-07-16 19:15:35,701 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ZKUtil(162): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,701 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ZKUtil(162): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,702 DEBUG [RS:0;jenkins-hbase4:38201] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:35,702 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:35,702 INFO [RS:0;jenkins-hbase4:38201] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:35,702 INFO [RS:1;jenkins-hbase4:41819] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:35,708 INFO [RS:1;jenkins-hbase4:41819] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:35,708 INFO [RS:0;jenkins-hbase4:38201] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:35,715 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,715 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,715 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,715 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
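The MemStoreFlusher lines report globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M; the global limit is derived from the region server heap times a configured fraction, while the per-region flush size seen earlier (flushSize=134217728) is a separate setting. A minimal sketch, assuming the standard keys:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreSizingSketch {
  public static Configuration memstoreConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024); // 128 MB per-region flush
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);        // fraction of RS heap
    return conf;
  }
}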
2023-07-16 19:15:35,719 INFO [RS:0;jenkins-hbase4:38201] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:35,719 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,724 INFO [RS:1;jenkins-hbase4:41819] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:35,724 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,725 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:35,725 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:35,728 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,728 DEBUG [RS:1;jenkins-hbase4:41819] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,734 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,735 DEBUG [RS:0;jenkins-hbase4:38201] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:35,736 INFO [RS:2;jenkins-hbase4:34171] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:35,736 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34171,1689534935345-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,744 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,744 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,744 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,744 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,745 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,745 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,745 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
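The ExecutorService lines show each region server starting fixed-size pools such as RS_OPEN_REGION and RS_CLOSE_REGION with corePoolSize=1. Pool sizes like these are configurable; the key names in the sketch below follow my reading of HRegionServer and are an assumption to verify, not documented guarantees, and the values 3/3 are examples rather than the defaults used in this run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsExecutorSketch {
  public static Configuration executorConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.regionserver.executor.openregion.threads", 3);  // size of RS_OPEN_REGION pool
    conf.setInt("hbase.regionserver.executor.closeregion.threads", 3); // size of RS_CLOSE_REGION pool
    return conf;
  }
}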
2023-07-16 19:15:35,745 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,750 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:35,758 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:35,760 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/info 2023-07-16 19:15:35,760 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:35,760 INFO [RS:0;jenkins-hbase4:38201] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:35,761 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38201,1689534935205-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:35,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:35,762 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:35,762 INFO [RS:1;jenkins-hbase4:41819] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:35,763 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41819,1689534935274-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
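The CompactionConfiguration lines print the effective selection parameters for each store: minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, major period 604800000 ms. A short sketch mapping those values back to their usual tuning keys:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
  public static Configuration compactionConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);              // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);             // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);       // exploring/ratio-based selection
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);  // major period, 7 days
    return conf;
  }
}

The 0.5 major jitter printed in the same lines corresponds to hbase.hregion.majorcompaction.jitter.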
2023-07-16 19:15:35,763 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:35,763 INFO [RS:2;jenkins-hbase4:34171] regionserver.Replication(203): jenkins-hbase4.apache.org,34171,1689534935345 started 2023-07-16 19:15:35,763 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34171,1689534935345, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34171, sessionid=0x1016f8fb3430003 2023-07-16 19:15:35,763 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:35,764 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:35,765 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/table 2023-07-16 19:15:35,765 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:35,766 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:35,766 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:35,767 DEBUG [RS:2;jenkins-hbase4:34171] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,767 DEBUG [RS:2;jenkins-hbase4:34171] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34171,1689534935345' 2023-07-16 19:15:35,767 DEBUG [RS:2;jenkins-hbase4:34171] procedure.ZKProcedureMemberRpcs(134): Checking for 
aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:35,767 DEBUG [RS:2;jenkins-hbase4:34171] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:35,767 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740 2023-07-16 19:15:35,768 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740 2023-07-16 19:15:35,768 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:35,768 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:35,768 DEBUG [RS:2;jenkins-hbase4:34171] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:35,768 DEBUG [RS:2;jenkins-hbase4:34171] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34171,1689534935345' 2023-07-16 19:15:35,768 DEBUG [RS:2;jenkins-hbase4:34171] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:35,768 DEBUG [RS:2;jenkins-hbase4:34171] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:35,769 DEBUG [RS:2;jenkins-hbase4:34171] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:35,769 INFO [RS:2;jenkins-hbase4:34171] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 19:15:35,771 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 19:15:35,772 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,772 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ZKUtil(398): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 19:15:35,772 INFO [RS:2;jenkins-hbase4:34171] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 19:15:35,772 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:35,773 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,773 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
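RegionServerRpcQuotaManager starts above with "rpc throttle enabled is true", but quotas only take effect when quota support is switched on and limits are actually defined. A hedged sketch of the client side, with an invented user name:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class RpcQuotaSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.quota.enabled", true); // the master must also run with this enabled
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Throttle a (hypothetical) user to 1000 requests per second cluster-wide.
      admin.setQuota(QuotaSettingsFactory.throttleUser(
          "example_user", ThrottleType.REQUEST_NUMBER, 1000, TimeUnit.SECONDS));
    }
  }
}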
2023-07-16 19:15:35,775 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:35,776 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10921896800, jitterRate=0.017180904746055603}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:35,776 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:35,776 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:35,776 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:35,776 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:35,776 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:35,776 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:35,782 INFO [RS:0;jenkins-hbase4:38201] regionserver.Replication(203): jenkins-hbase4.apache.org,38201,1689534935205 started 2023-07-16 19:15:35,782 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:35,783 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38201,1689534935205, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38201, sessionid=0x1016f8fb3430001 2023-07-16 19:15:35,783 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 19:15:35,783 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:35,783 DEBUG [RS:0;jenkins-hbase4:38201] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,783 DEBUG [RS:0;jenkins-hbase4:38201] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38201,1689534935205' 2023-07-16 19:15:35,783 DEBUG [RS:0;jenkins-hbase4:38201] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:35,784 DEBUG [RS:0;jenkins-hbase4:38201] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:35,784 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:35,784 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 19:15:35,784 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 19:15:35,784 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:35,784 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(51): Procedure 
online-snapshot starting 2023-07-16 19:15:35,784 DEBUG [RS:0;jenkins-hbase4:38201] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:35,784 DEBUG [RS:0;jenkins-hbase4:38201] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38201,1689534935205' 2023-07-16 19:15:35,784 DEBUG [RS:0;jenkins-hbase4:38201] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:35,785 DEBUG [RS:0;jenkins-hbase4:38201] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:35,785 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 19:15:35,786 DEBUG [RS:0;jenkins-hbase4:38201] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:35,786 INFO [RS:0;jenkins-hbase4:38201] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 19:15:35,786 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,786 INFO [RS:1;jenkins-hbase4:41819] regionserver.Replication(203): jenkins-hbase4.apache.org,41819,1689534935274 started 2023-07-16 19:15:35,787 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 19:15:35,787 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41819,1689534935274, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41819, sessionid=0x1016f8fb3430002 2023-07-16 19:15:35,787 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:35,787 DEBUG [RS:1;jenkins-hbase4:41819] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,787 DEBUG [RS:1;jenkins-hbase4:41819] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41819,1689534935274' 2023-07-16 19:15:35,787 DEBUG [RS:1;jenkins-hbase4:41819] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:35,787 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ZKUtil(398): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 19:15:35,787 INFO [RS:0;jenkins-hbase4:38201] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 19:15:35,787 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,787 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:35,787 DEBUG [RS:1;jenkins-hbase4:41819] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41819,1689534935274' 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:35,788 DEBUG [RS:1;jenkins-hbase4:41819] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:35,788 INFO [RS:1;jenkins-hbase4:41819] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 19:15:35,788 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,789 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ZKUtil(398): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 19:15:35,789 INFO [RS:1;jenkins-hbase4:41819] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 19:15:35,789 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:35,789 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:35,877 INFO [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34171%2C1689534935345, suffix=, logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,34171,1689534935345, archiveDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs, maxLogs=32 2023-07-16 19:15:35,889 INFO [RS:0;jenkins-hbase4:38201] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38201%2C1689534935205, suffix=, logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,38201,1689534935205, archiveDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs, maxLogs=32 2023-07-16 19:15:35,891 INFO [RS:1;jenkins-hbase4:41819] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41819%2C1689534935274, suffix=, logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,41819,1689534935274, archiveDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs, maxLogs=32 2023-07-16 19:15:35,912 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK] 2023-07-16 19:15:35,912 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK] 2023-07-16 19:15:35,913 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK] 2023-07-16 19:15:35,939 DEBUG [jenkins-hbase4:41265] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 19:15:35,940 DEBUG [jenkins-hbase4:41265] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:35,940 DEBUG [jenkins-hbase4:41265] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:35,940 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK] 2023-07-16 19:15:35,940 DEBUG [jenkins-hbase4:41265] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:35,940 DEBUG [jenkins-hbase4:41265] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:35,940 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK] 2023-07-16 19:15:35,941 DEBUG 
[RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK] 2023-07-16 19:15:35,940 DEBUG [jenkins-hbase4:41265] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:35,950 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34171,1689534935345, state=OPENING 2023-07-16 19:15:35,950 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK] 2023-07-16 19:15:35,950 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK] 2023-07-16 19:15:35,951 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK] 2023-07-16 19:15:35,951 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 19:15:35,951 INFO [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,34171,1689534935345/jenkins-hbase4.apache.org%2C34171%2C1689534935345.1689534935879 2023-07-16 19:15:35,955 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:35,956 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:35,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34171,1689534935345}] 2023-07-16 19:15:35,976 DEBUG [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK], DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK], DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK]] 2023-07-16 19:15:35,977 INFO [RS:1;jenkins-hbase4:41819] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,41819,1689534935274/jenkins-hbase4.apache.org%2C41819%2C1689534935274.1689534935892 2023-07-16 19:15:35,977 INFO [RS:0;jenkins-hbase4:38201] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,38201,1689534935205/jenkins-hbase4.apache.org%2C38201%2C1689534935205.1689534935891 2023-07-16 19:15:35,982 DEBUG [RS:1;jenkins-hbase4:41819] wal.AbstractFSWAL(887): Create new AsyncFSWAL 
writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK], DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK], DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK]] 2023-07-16 19:15:35,986 DEBUG [RS:0;jenkins-hbase4:38201] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK], DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK], DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK]] 2023-07-16 19:15:36,131 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:36,131 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:36,133 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37034, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:36,139 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 19:15:36,139 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:36,140 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34171%2C1689534935345.meta, suffix=.meta, logDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,34171,1689534935345, archiveDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs, maxLogs=32 2023-07-16 19:15:36,158 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK] 2023-07-16 19:15:36,158 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK] 2023-07-16 19:15:36,161 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK] 2023-07-16 19:15:36,167 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/WALs/jenkins-hbase4.apache.org,34171,1689534935345/jenkins-hbase4.apache.org%2C34171%2C1689534935345.meta.1689534936141.meta 2023-07-16 19:15:36,167 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35989,DS-0d10d8a0-1d60-4e8a-ac99-a532cf359f26,DISK], DatanodeInfoWithStorage[127.0.0.1:34819,DS-af6ec8ac-0c35-4ce0-9a35-4a1a157ee284,DISK], 
DatanodeInfoWithStorage[127.0.0.1:39811,DS-a031fdd0-72cd-44f1-a1ac-c4325c9dd77f,DISK]] 2023-07-16 19:15:36,167 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:36,167 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:36,167 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 19:15:36,168 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 19:15:36,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 19:15:36,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 19:15:36,168 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 19:15:36,169 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:36,170 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/info 2023-07-16 19:15:36,170 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/info 2023-07-16 19:15:36,171 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:36,171 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,171 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:36,172 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:36,173 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:36,173 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:36,174 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,174 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:36,175 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/table 2023-07-16 19:15:36,175 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/table 2023-07-16 19:15:36,175 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:36,176 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,176 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740 2023-07-16 19:15:36,178 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740 2023-07-16 19:15:36,180 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 19:15:36,182 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:36,183 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11956591200, jitterRate=0.11354433000087738}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:36,183 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:36,186 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689534936131 2023-07-16 19:15:36,191 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 19:15:36,192 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 19:15:36,192 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34171,1689534935345, state=OPEN 2023-07-16 19:15:36,195 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 19:15:36,195 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:36,197 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 19:15:36,197 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34171,1689534935345 in 239 msec 2023-07-16 19:15:36,198 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 19:15:36,198 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 413 msec 2023-07-16 19:15:36,200 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 624 msec 2023-07-16 19:15:36,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689534936200, completionTime=-1 2023-07-16 
19:15:36,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 19:15:36,201 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-16 19:15:36,204 DEBUG [hconnection-0x18100de1-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:36,205 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37038, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:36,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 19:15:36,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689534996207 2023-07-16 19:15:36,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689535056207 2023-07-16 19:15:36,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-16 19:15:36,208 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:36,210 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 19:15:36,211 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 19:15:36,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41265,1689534935134-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:36,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41265,1689534935134-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:36,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41265,1689534935134-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:36,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41265, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:36,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:36,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-16 19:15:36,215 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:36,215 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:36,216 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:36,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 19:15:36,216 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 19:15:36,217 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:36,219 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:36,219 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,220 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812 empty. 2023-07-16 19:15:36,220 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,220 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,220 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 19:15:36,221 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495 empty. 
2023-07-16 19:15:36,222 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,222 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 19:15:36,243 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:36,247 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6ee148af7855c86512358b2ddb1d812, NAME => 'hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp 2023-07-16 19:15:36,248 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:36,249 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ef8e48b6645cd9fc03f8400b57fcf495, NAME => 'hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp 2023-07-16 19:15:36,258 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,258 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f6ee148af7855c86512358b2ddb1d812, disabling compactions & flushes 2023-07-16 19:15:36,258 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,258 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,259 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 
after waiting 0 ms 2023-07-16 19:15:36,259 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,259 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,259 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f6ee148af7855c86512358b2ddb1d812: 2023-07-16 19:15:36,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ef8e48b6645cd9fc03f8400b57fcf495, disabling compactions & flushes 2023-07-16 19:15:36,265 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:36,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:36,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. after waiting 0 ms 2023-07-16 19:15:36,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:36,265 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:36,265 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ef8e48b6645cd9fc03f8400b57fcf495: 2023-07-16 19:15:36,267 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:36,267 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:36,268 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534936268"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534936268"}]},"ts":"1689534936268"} 2023-07-16 19:15:36,268 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534936268"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534936268"}]},"ts":"1689534936268"} 2023-07-16 19:15:36,271 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:36,272 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 19:15:36,272 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:36,272 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936272"}]},"ts":"1689534936272"} 2023-07-16 19:15:36,272 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:36,273 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936273"}]},"ts":"1689534936273"} 2023-07-16 19:15:36,273 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 19:15:36,276 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 19:15:36,277 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:36,277 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:36,277 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:36,277 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:36,277 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:36,277 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ef8e48b6645cd9fc03f8400b57fcf495, ASSIGN}] 2023-07-16 19:15:36,280 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:36,280 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:36,280 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:36,280 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:36,280 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:36,280 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f6ee148af7855c86512358b2ddb1d812, ASSIGN}] 2023-07-16 19:15:36,286 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ef8e48b6645cd9fc03f8400b57fcf495, ASSIGN 2023-07-16 19:15:36,286 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f6ee148af7855c86512358b2ddb1d812, ASSIGN 2023-07-16 19:15:36,287 INFO [PEWorker-2] 
assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f6ee148af7855c86512358b2ddb1d812, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41819,1689534935274; forceNewPlan=false, retain=false 2023-07-16 19:15:36,287 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ef8e48b6645cd9fc03f8400b57fcf495, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41819,1689534935274; forceNewPlan=false, retain=false 2023-07-16 19:15:36,287 INFO [jenkins-hbase4:41265] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-16 19:15:36,291 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f6ee148af7855c86512358b2ddb1d812, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:36,291 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=ef8e48b6645cd9fc03f8400b57fcf495, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:36,291 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534936290"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534936290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534936290"}]},"ts":"1689534936290"} 2023-07-16 19:15:36,291 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534936290"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534936290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534936290"}]},"ts":"1689534936290"} 2023-07-16 19:15:36,292 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure ef8e48b6645cd9fc03f8400b57fcf495, server=jenkins-hbase4.apache.org,41819,1689534935274}] 2023-07-16 19:15:36,294 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure f6ee148af7855c86512358b2ddb1d812, server=jenkins-hbase4.apache.org,41819,1689534935274}] 2023-07-16 19:15:36,445 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:36,445 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:36,447 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56870, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:36,451 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 
2023-07-16 19:15:36,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ef8e48b6645cd9fc03f8400b57fcf495, NAME => 'hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:36,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,452 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,453 INFO [StoreOpener-ef8e48b6645cd9fc03f8400b57fcf495-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,455 DEBUG [StoreOpener-ef8e48b6645cd9fc03f8400b57fcf495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/info 2023-07-16 19:15:36,455 DEBUG [StoreOpener-ef8e48b6645cd9fc03f8400b57fcf495-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/info 2023-07-16 19:15:36,455 INFO [StoreOpener-ef8e48b6645cd9fc03f8400b57fcf495-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ef8e48b6645cd9fc03f8400b57fcf495 columnFamilyName info 2023-07-16 19:15:36,456 INFO [StoreOpener-ef8e48b6645cd9fc03f8400b57fcf495-1] regionserver.HStore(310): Store=ef8e48b6645cd9fc03f8400b57fcf495/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,457 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,457 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,460 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:36,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:36,463 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ef8e48b6645cd9fc03f8400b57fcf495; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10815271520, jitterRate=0.007250651717185974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:36,463 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ef8e48b6645cd9fc03f8400b57fcf495: 2023-07-16 19:15:36,464 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495., pid=8, masterSystemTime=1689534936445 2023-07-16 19:15:36,472 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:36,473 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:36,473 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6ee148af7855c86512358b2ddb1d812, NAME => 'hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:36,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:36,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. service=MultiRowMutationService 2023-07-16 19:15:36,473 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 19:15:36,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,474 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,475 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=ef8e48b6645cd9fc03f8400b57fcf495, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:36,475 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534936475"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534936475"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534936475"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534936475"}]},"ts":"1689534936475"} 2023-07-16 19:15:36,477 INFO [StoreOpener-f6ee148af7855c86512358b2ddb1d812-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 19:15:36,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure ef8e48b6645cd9fc03f8400b57fcf495, server=jenkins-hbase4.apache.org,41819,1689534935274 in 184 msec 2023-07-16 19:15:36,478 DEBUG [StoreOpener-f6ee148af7855c86512358b2ddb1d812-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/m 2023-07-16 19:15:36,478 DEBUG [StoreOpener-f6ee148af7855c86512358b2ddb1d812-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/m 2023-07-16 19:15:36,479 INFO [StoreOpener-f6ee148af7855c86512358b2ddb1d812-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6ee148af7855c86512358b2ddb1d812 columnFamilyName m 2023-07-16 19:15:36,480 INFO [StoreOpener-f6ee148af7855c86512358b2ddb1d812-1] regionserver.HStore(310): Store=f6ee148af7855c86512358b2ddb1d812/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,480 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-16 19:15:36,480 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ef8e48b6645cd9fc03f8400b57fcf495, ASSIGN in 201 msec 2023-07-16 19:15:36,481 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:36,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,481 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936481"}]},"ts":"1689534936481"} 2023-07-16 19:15:36,481 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,483 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 19:15:36,485 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:36,485 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:36,487 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 270 msec 2023-07-16 19:15:36,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:36,488 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6ee148af7855c86512358b2ddb1d812; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@63afa703, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:36,488 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6ee148af7855c86512358b2ddb1d812: 2023-07-16 19:15:36,489 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812., pid=9, 
masterSystemTime=1689534936445 2023-07-16 19:15:36,490 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,490 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:36,490 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f6ee148af7855c86512358b2ddb1d812, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:36,490 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534936490"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534936490"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534936490"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534936490"}]},"ts":"1689534936490"} 2023-07-16 19:15:36,493 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 19:15:36,493 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure f6ee148af7855c86512358b2ddb1d812, server=jenkins-hbase4.apache.org,41819,1689534935274 in 197 msec 2023-07-16 19:15:36,495 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 2023-07-16 19:15:36,495 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f6ee148af7855c86512358b2ddb1d812, ASSIGN in 213 msec 2023-07-16 19:15:36,496 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:36,496 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936496"}]},"ts":"1689534936496"} 2023-07-16 19:15:36,497 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 19:15:36,499 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:36,500 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 291 msec 2023-07-16 19:15:36,514 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:36,516 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56874, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:36,519 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does 
not yet exist, /hbase/namespace 2023-07-16 19:15:36,519 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 19:15:36,520 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 19:15:36,521 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:36,521 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:36,525 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 19:15:36,529 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:36,529 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:36,530 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:36,532 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41265,1689534935134] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 19:15:36,532 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:36,536 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-16 19:15:36,546 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 19:15:36,552 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:36,555 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-16 19:15:36,561 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 
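With the GroupBasedLoadBalancer reported online above, the rsgroup endpoint can serve admin calls such as the RSGroupAdminService.ListRSGroupInfos request that appears later in this log. A minimal sketch of issuing that query from a client, assuming the branch-2 RSGroupAdminClient (the class this test's VerifyingRSGroupAdminClient wraps) is on the classpath:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRSGroupsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      // Each RSGroupInfo carries the group's member servers and tables; a fresh
      // cluster has only the "default" group.
      for (RSGroupInfo info : groups.listRSGroups()) {
        System.out.println(info.getName() + " servers=" + info.getServers());
      }
    }
  }
}
```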
2023-07-16 19:15:36,563 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 19:15:36,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.163sec 2023-07-16 19:15:36,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-16 19:15:36,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:36,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-16 19:15:36,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-16 19:15:36,566 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:36,566 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:36,567 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-16 19:15:36,568 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/quota/cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,568 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/quota/cf3b7d987701b941458f77ce1508341a empty. 2023-07-16 19:15:36,569 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/quota/cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,569 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-16 19:15:36,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-16 19:15:36,573 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-16 19:15:36,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 
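The hbase:quota table being created here (column families q and u) is where quota settings are stored once a client defines them. As a hedged sketch only, and with the user name and limit invented for illustration, a request throttle would be written through the standard Admin API and end up as a row in that table:

```java
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class ThrottleQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Limit the hypothetical user "jenkins" to 100 requests per second.
      admin.setQuota(QuotaSettingsFactory.throttleUser(
          "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
    }
  }
}
```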
2023-07-16 19:15:36,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:36,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 19:15:36,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 19:15:36,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41265,1689534935134-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 19:15:36,577 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41265,1689534935134-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 19:15:36,577 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 19:15:36,582 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:36,583 DEBUG [Listener at localhost/36007] zookeeper.ReadOnlyZKClient(139): Connect 0x52ea318a to 127.0.0.1:56571 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:36,585 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => cf3b7d987701b941458f77ce1508341a, NAME => 'hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp 2023-07-16 19:15:36,588 DEBUG [Listener at localhost/36007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18aff7c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:36,591 DEBUG [hconnection-0x9f33695-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:36,593 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37046, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:36,594 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:36,595 INFO [Listener at localhost/36007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:36,595 DEBUG 
[RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,597 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing cf3b7d987701b941458f77ce1508341a, disabling compactions & flushes 2023-07-16 19:15:36,597 DEBUG [Listener at localhost/36007] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 19:15:36,597 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:36,597 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:36,598 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. after waiting 0 ms 2023-07-16 19:15:36,598 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:36,598 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:36,598 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for cf3b7d987701b941458f77ce1508341a: 2023-07-16 19:15:36,599 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41316, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 19:15:36,600 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:36,602 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689534936602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534936602"}]},"ts":"1689534936602"} 2023-07-16 19:15:36,602 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 19:15:36,602 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:36,603 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
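The NodeCreated event for /hbase/balancer above, together with the balanceSwitch=false request logged immediately after, appears to correspond to the test disabling the load balancer through the Admin API before it starts manipulating groups. A minimal sketch, with the synchronous flag chosen arbitrarily:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Returns the previous state; the new state is tracked under the /hbase/balancer znode.
      boolean wasOn = admin.balancerSwitch(false, true);
      System.out.println("balancer was previously on: " + wasOn);
    }
  }
}
```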
2023-07-16 19:15:36,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 19:15:36,604 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:36,604 DEBUG [Listener at localhost/36007] zookeeper.ReadOnlyZKClient(139): Connect 0x5ea8809c to 127.0.0.1:56571 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:36,604 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936604"}]},"ts":"1689534936604"} 2023-07-16 19:15:36,607 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-16 19:15:36,609 DEBUG [Listener at localhost/36007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e7f4242, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:36,609 INFO [Listener at localhost/36007] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56571 2023-07-16 19:15:36,611 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:36,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:36,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:36,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:36,612 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:36,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=cf3b7d987701b941458f77ce1508341a, ASSIGN}] 2023-07-16 19:15:36,612 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:36,613 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016f8fb343000a connected 2023-07-16 19:15:36,614 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=cf3b7d987701b941458f77ce1508341a, ASSIGN 2023-07-16 19:15:36,617 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=cf3b7d987701b941458f77ce1508341a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34171,1689534935345; forceNewPlan=false, retain=false 2023-07-16 19:15:36,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.HMaster$15(3014): 
Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-16 19:15:36,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-16 19:15:36,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 19:15:36,633 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:36,637 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 18 msec 2023-07-16 19:15:36,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 19:15:36,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:36,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-16 19:15:36,734 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:36,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-16 19:15:36,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 19:15:36,736 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:36,737 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:36,739 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:36,740 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:36,741 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 empty. 
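The np1 namespace above is created with two quota properties, hbase.namespace.quota.maxregions => '5' and hbase.namespace.quota.maxtables => '2', which the NamespaceAuditor enforces later in this log. A sketch of the equivalent client request, with the property names and values copied from the log and everything else assumed:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class QuotaNamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5") // at most 5 regions in np1
          .addConfiguration("hbase.namespace.quota.maxtables", "2")  // at most 2 tables in np1
          .build();
      admin.createNamespace(np1);
    }
  }
}
```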
2023-07-16 19:15:36,741 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:36,741 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 19:15:36,755 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:36,757 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e4a62679c2d57040286c43a2d8dc12a8, NAME => 'np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp 2023-07-16 19:15:36,766 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,766 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing e4a62679c2d57040286c43a2d8dc12a8, disabling compactions & flushes 2023-07-16 19:15:36,766 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:36,766 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:36,766 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. after waiting 0 ms 2023-07-16 19:15:36,766 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:36,766 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:36,766 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for e4a62679c2d57040286c43a2d8dc12a8: 2023-07-16 19:15:36,767 INFO [jenkins-hbase4:41265] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
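np1:table1 is created with a single column family fam1 and otherwise default attributes, as the descriptor printed above shows. A hedged sketch of the matching client-side call:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTable1Sketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // One family, default VERSIONS/BLOCKSIZE/etc., matching the descriptor in the log.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}
```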
2023-07-16 19:15:36,769 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=cf3b7d987701b941458f77ce1508341a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:36,769 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689534936769"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534936769"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534936769"}]},"ts":"1689534936769"} 2023-07-16 19:15:36,769 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:36,770 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534936770"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534936770"}]},"ts":"1689534936770"} 2023-07-16 19:15:36,770 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure cf3b7d987701b941458f77ce1508341a, server=jenkins-hbase4.apache.org,34171,1689534935345}] 2023-07-16 19:15:36,772 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:36,773 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:36,773 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936773"}]},"ts":"1689534936773"} 2023-07-16 19:15:36,775 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-16 19:15:36,779 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:36,779 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:36,779 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:36,779 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:36,779 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:36,779 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=e4a62679c2d57040286c43a2d8dc12a8, ASSIGN}] 2023-07-16 19:15:36,780 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=e4a62679c2d57040286c43a2d8dc12a8, ASSIGN 2023-07-16 19:15:36,781 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=e4a62679c2d57040286c43a2d8dc12a8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41819,1689534935274; forceNewPlan=false, retain=false 2023-07-16 19:15:36,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 19:15:36,927 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:36,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cf3b7d987701b941458f77ce1508341a, NAME => 'hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:36,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:36,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,928 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,929 INFO [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,931 DEBUG [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a/q 2023-07-16 19:15:36,931 DEBUG [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a/q 2023-07-16 19:15:36,931 INFO [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cf3b7d987701b941458f77ce1508341a columnFamilyName q 2023-07-16 19:15:36,932 INFO [jenkins-hbase4:41265] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
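The compaction figures printed for the q store above come from defaults rather than anything quota-specific: minCompactSize falls back to the 128 MB memstore flush size, and the 2684354560-byte throttle point is 2 * maxFilesToCompact * flush size when hbase.regionserver.thread.compaction.throttle is left unset. A quick check of that arithmetic, with the stock defaults assumed:

```java
public class CompactionThrottleCheck {
  public static void main(String[] args) {
    long memstoreFlushSize = 128L * 1024 * 1024; // hbase.hregion.memstore.flush.size default (134217728)
    int maxFilesToCompact = 10;                  // hbase.hstore.compaction.max default
    long throttlePoint = 2L * maxFilesToCompact * memstoreFlushSize;
    System.out.println(throttlePoint);           // 2684354560, matching the log line above
  }
}
```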
2023-07-16 19:15:36,933 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e4a62679c2d57040286c43a2d8dc12a8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:36,933 INFO [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] regionserver.HStore(310): Store=cf3b7d987701b941458f77ce1508341a/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,933 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534936933"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534936933"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534936933"}]},"ts":"1689534936933"} 2023-07-16 19:15:36,933 INFO [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,935 DEBUG [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a/u 2023-07-16 19:15:36,935 DEBUG [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a/u 2023-07-16 19:15:36,935 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure e4a62679c2d57040286c43a2d8dc12a8, server=jenkins-hbase4.apache.org,41819,1689534935274}] 2023-07-16 19:15:36,935 INFO [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cf3b7d987701b941458f77ce1508341a columnFamilyName u 2023-07-16 19:15:36,936 INFO [StoreOpener-cf3b7d987701b941458f77ce1508341a-1] regionserver.HStore(310): Store=cf3b7d987701b941458f77ce1508341a/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:36,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,937 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,939 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-16 19:15:36,940 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:36,942 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:36,943 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cf3b7d987701b941458f77ce1508341a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10264585440, jitterRate=-0.04403598606586456}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-16 19:15:36,943 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cf3b7d987701b941458f77ce1508341a: 2023-07-16 19:15:36,943 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a., pid=16, masterSystemTime=1689534936923 2023-07-16 19:15:36,945 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:36,945 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 
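Two numbers in the hbase:quota open above can be reconstructed from defaults: flushSizeLowerBound=67108864 is the 128 MB region flush size divided across the two families q and u (the log says as much), and desiredMaxFileSize is the default 10 GiB hbase.hregion.max.filesize plus the logged jitterRate applied to it. A small check, with the defaults assumed rather than read from the cluster:

```java
public class SplitJitterCheck {
  public static void main(String[] args) {
    long maxFileSize = 10L * 1024 * 1024 * 1024; // hbase.hregion.max.filesize default (10737418240)
    double jitterRate = -0.04403598606586456;    // value logged for cf3b7d987701b941458f77ce1508341a
    long desired = maxFileSize + (long) (maxFileSize * jitterRate);
    System.out.println(desired);                 // 10264585440, as logged

    long flushSize = 128L * 1024 * 1024;         // hbase.hregion.memstore.flush.size default
    System.out.println(flushSize / 2);           // 67108864, i.e. 64 MB per family (q and u)
  }
}
```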
2023-07-16 19:15:36,945 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=cf3b7d987701b941458f77ce1508341a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:36,945 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689534936945"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534936945"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534936945"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534936945"}]},"ts":"1689534936945"} 2023-07-16 19:15:36,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-16 19:15:36,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure cf3b7d987701b941458f77ce1508341a, server=jenkins-hbase4.apache.org,34171,1689534935345 in 177 msec 2023-07-16 19:15:36,951 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 19:15:36,951 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=cf3b7d987701b941458f77ce1508341a, ASSIGN in 337 msec 2023-07-16 19:15:36,951 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:36,952 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534936952"}]},"ts":"1689534936952"} 2023-07-16 19:15:36,953 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-16 19:15:36,954 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:36,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 392 msec 2023-07-16 19:15:37,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 19:15:37,090 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 
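The RegionStateStore Put near the start of this stretch records the hbase:quota region's OPEN state and location in hbase:meta; clients resolve region locations from the same columns. A purely illustrative sketch of looking that location up from the client side:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class QuotaRegionLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:quota"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints the encoded region name and hosting server, i.e. the data the
        // master just wrote into the info:regioninfo/info:server columns.
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```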
2023-07-16 19:15:37,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e4a62679c2d57040286c43a2d8dc12a8, NAME => 'np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:37,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:37,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,097 INFO [StoreOpener-e4a62679c2d57040286c43a2d8dc12a8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,098 DEBUG [StoreOpener-e4a62679c2d57040286c43a2d8dc12a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/fam1 2023-07-16 19:15:37,098 DEBUG [StoreOpener-e4a62679c2d57040286c43a2d8dc12a8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/fam1 2023-07-16 19:15:37,098 INFO [StoreOpener-e4a62679c2d57040286c43a2d8dc12a8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e4a62679c2d57040286c43a2d8dc12a8 columnFamilyName fam1 2023-07-16 19:15:37,099 INFO [StoreOpener-e4a62679c2d57040286c43a2d8dc12a8-1] regionserver.HStore(310): Store=e4a62679c2d57040286c43a2d8dc12a8/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:37,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,100 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:37,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e4a62679c2d57040286c43a2d8dc12a8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11372718720, jitterRate=0.05916696786880493}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:37,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e4a62679c2d57040286c43a2d8dc12a8: 2023-07-16 19:15:37,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8., pid=18, masterSystemTime=1689534937086 2023-07-16 19:15:37,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:37,108 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:37,109 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e4a62679c2d57040286c43a2d8dc12a8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:37,109 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534937109"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534937109"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534937109"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534937109"}]},"ts":"1689534937109"} 2023-07-16 19:15:37,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 19:15:37,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure e4a62679c2d57040286c43a2d8dc12a8, server=jenkins-hbase4.apache.org,41819,1689534935274 in 175 msec 2023-07-16 19:15:37,113 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-16 19:15:37,113 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=e4a62679c2d57040286c43a2d8dc12a8, ASSIGN in 332 msec 2023-07-16 19:15:37,115 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:37,115 DEBUG [PEWorker-4] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534937115"}]},"ts":"1689534937115"} 2023-07-16 19:15:37,116 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-16 19:15:37,118 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:37,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 388 msec 2023-07-16 19:15:37,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 19:15:37,339 INFO [Listener at localhost/36007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-16 19:15:37,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:37,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-16 19:15:37,343 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:37,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-16 19:15:37,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 19:15:37,362 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=20 msec 2023-07-16 19:15:37,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 19:15:37,448 INFO [Listener at localhost/36007] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
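The rollback above is the NamespaceAuditor enforcing hbase.namespace.quota.maxregions=5 on np1: table1 already accounts for 1 region and np1:table2 was requested with 5 more, so the create is rejected before any region is assigned. A hedged sketch of how a client (or a test such as this one) might observe that failure; the split keys below are invented for illustration:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Four split keys => five regions; added to np1:table1's single region this
      // would exceed the namespace's maxregions=5 quota.
      byte[][] splits = {
          Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"), Bytes.toBytes("4") };
      try {
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build(), splits);
      } catch (IOException e) {
        // The namespace quota violation (a QuotaExceededException in the master's
        // procedure, per the log) surfaces to the caller as the create failing.
        System.out.println("rejected by namespace quota: " + e.getMessage());
      }
    }
  }
}
```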
2023-07-16 19:15:37,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:37,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:37,450 INFO [Listener at localhost/36007] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-16 19:15:37,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-16 19:15:37,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-16 19:15:37,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 19:15:37,453 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534937453"}]},"ts":"1689534937453"} 2023-07-16 19:15:37,454 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-16 19:15:37,456 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-16 19:15:37,456 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=e4a62679c2d57040286c43a2d8dc12a8, UNASSIGN}] 2023-07-16 19:15:37,457 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=e4a62679c2d57040286c43a2d8dc12a8, UNASSIGN 2023-07-16 19:15:37,458 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e4a62679c2d57040286c43a2d8dc12a8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:37,458 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534937458"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534937458"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534937458"}]},"ts":"1689534937458"} 2023-07-16 19:15:37,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure e4a62679c2d57040286c43a2d8dc12a8, server=jenkins-hbase4.apache.org,41819,1689534935274}] 2023-07-16 19:15:37,554 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 19:15:37,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e4a62679c2d57040286c43a2d8dc12a8, disabling compactions & flushes 2023-07-16 19:15:37,612 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:37,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:37,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. after waiting 0 ms 2023-07-16 19:15:37,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:37,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:37,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8. 2023-07-16 19:15:37,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e4a62679c2d57040286c43a2d8dc12a8: 2023-07-16 19:15:37,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,618 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=e4a62679c2d57040286c43a2d8dc12a8, regionState=CLOSED 2023-07-16 19:15:37,618 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534937618"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534937618"}]},"ts":"1689534937618"} 2023-07-16 19:15:37,621 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-16 19:15:37,621 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure e4a62679c2d57040286c43a2d8dc12a8, server=jenkins-hbase4.apache.org,41819,1689534935274 in 160 msec 2023-07-16 19:15:37,622 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-16 19:15:37,622 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=e4a62679c2d57040286c43a2d8dc12a8, UNASSIGN in 165 msec 2023-07-16 19:15:37,622 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534937622"}]},"ts":"1689534937622"} 2023-07-16 19:15:37,623 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-16 19:15:37,626 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-16 19:15:37,674 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 176 msec 2023-07-16 19:15:37,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 19:15:37,755 INFO [Listener at localhost/36007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-16 19:15:37,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-16 19:15:37,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-16 19:15:37,758 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 19:15:37,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-16 19:15:37,759 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 19:15:37,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:37,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:37,762 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,764 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/fam1, FileablePath, hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/recovered.edits] 2023-07-16 19:15:37,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 19:15:37,769 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/recovered.edits/4.seqid to hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/archive/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8/recovered.edits/4.seqid 2023-07-16 19:15:37,769 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/.tmp/data/np1/table1/e4a62679c2d57040286c43a2d8dc12a8 2023-07-16 19:15:37,769 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 19:15:37,771 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 19:15:37,773 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-16 19:15:37,775 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-16 19:15:37,776 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 19:15:37,776 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-16 19:15:37,776 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534937776"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:37,777 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 19:15:37,777 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e4a62679c2d57040286c43a2d8dc12a8, NAME => 'np1:table1,,1689534936730.e4a62679c2d57040286c43a2d8dc12a8.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 19:15:37,777 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-16 19:15:37,777 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534937777"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:37,778 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-16 19:15:37,781 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 19:15:37,782 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 25 msec 2023-07-16 19:15:37,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 19:15:37,866 INFO [Listener at localhost/36007] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-16 19:15:37,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-16 19:15:37,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-16 19:15:37,879 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 19:15:37,882 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 19:15:37,884 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 19:15:37,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 19:15:37,885 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-16 19:15:37,885 DEBUG [Listener at 
localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:37,886 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 19:15:37,887 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 19:15:37,888 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-16 19:15:37,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41265] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 19:15:37,986 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 19:15:37,986 INFO [Listener at localhost/36007] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 19:15:37,986 DEBUG [Listener at localhost/36007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x52ea318a to 127.0.0.1:56571 2023-07-16 19:15:37,987 DEBUG [Listener at localhost/36007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:37,987 DEBUG [Listener at localhost/36007] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 19:15:37,987 DEBUG [Listener at localhost/36007] util.JVMClusterUtil(257): Found active master hash=294655290, stopped=false 2023-07-16 19:15:37,988 DEBUG [Listener at localhost/36007] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 19:15:37,988 DEBUG [Listener at localhost/36007] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 19:15:37,988 DEBUG [Listener at localhost/36007] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-16 19:15:37,988 INFO [Listener at localhost/36007] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:37,991 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:37,991 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:37,991 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:37,991 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:37,991 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, 
quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:37,991 INFO [Listener at localhost/36007] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 19:15:37,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:37,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:37,993 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:37,994 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1064): Closing user regions 2023-07-16 19:15:37,994 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:37,994 DEBUG [Listener at localhost/36007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2d4ace27 to 127.0.0.1:56571 2023-07-16 19:15:37,994 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(3305): Received CLOSE for cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:37,994 DEBUG [Listener at localhost/36007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:37,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cf3b7d987701b941458f77ce1508341a, disabling compactions & flushes 2023-07-16 19:15:37,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:37,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:37,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. after waiting 0 ms 2023-07-16 19:15:37,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 
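The cleanup recorded above, DisableTableProcedure (pid=20), DeleteTableProcedure (pid=23) and DeleteNamespaceProcedure (pid=24), maps onto three Admin calls. A minimal sketch under the assumption of an already-open Admin handle like the one in the previous snippet; the explicit isTableDisabled check is illustrative and not something the log shows.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropNamespaceSketch {
  /** Hedged sketch of the disable -> delete -> drop-namespace sequence in the log. */
  static void dropNp1(Admin admin) throws java.io.IOException {
    TableName t1 = TableName.valueOf("np1", "table1");

    // DisableTableProcedure: table state goes DISABLING -> DISABLED in hbase:meta,
    // and each region is unassigned via TransitRegionStateProcedure/CloseRegionProcedure.
    admin.disableTable(t1);

    // DeleteTableProcedure: region directories are moved under archive/ by HFileArchiver,
    // region and table-state rows are removed from hbase:meta, the descriptor is dropped.
    if (admin.isTableDisabled(t1)) {
      admin.deleteTable(t1);
    }

    // DeleteNamespaceProcedure: removes the namespace entry and its znode under
    // /hbase/namespace; it only succeeds once the namespace holds no tables.
    admin.deleteNamespace("np1");
  }
}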
2023-07-16 19:15:37,996 INFO [Listener at localhost/36007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38201,1689534935205' ***** 2023-07-16 19:15:37,996 INFO [Listener at localhost/36007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:37,996 INFO [Listener at localhost/36007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41819,1689534935274' ***** 2023-07-16 19:15:37,996 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:37,996 INFO [Listener at localhost/36007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:37,996 INFO [Listener at localhost/36007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34171,1689534935345' ***** 2023-07-16 19:15:37,997 INFO [Listener at localhost/36007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:37,996 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:37,999 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:38,009 INFO [RS:0;jenkins-hbase4:38201] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2d6aa7c8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:38,010 INFO [RS:1;jenkins-hbase4:41819] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7e287aab{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:38,010 INFO [RS:2;jenkins-hbase4:34171] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2492ed2c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:38,011 INFO [RS:2;jenkins-hbase4:34171] server.AbstractConnector(383): Stopped ServerConnector@7947507{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:38,011 INFO [RS:1;jenkins-hbase4:41819] server.AbstractConnector(383): Stopped ServerConnector@535d47bf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:38,011 INFO [RS:0;jenkins-hbase4:38201] server.AbstractConnector(383): Stopped ServerConnector@64edd7cb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:38,011 INFO [RS:1;jenkins-hbase4:41819] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:38,011 INFO [RS:2;jenkins-hbase4:34171] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:38,011 INFO [RS:0;jenkins-hbase4:38201] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:38,012 INFO [RS:1;jenkins-hbase4:41819] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@51764913{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:38,014 INFO [RS:2;jenkins-hbase4:34171] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2689a462{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:38,014 INFO [RS:1;jenkins-hbase4:41819] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@79be72f9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:38,014 INFO [RS:2;jenkins-hbase4:34171] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@37a3ac68{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:38,014 INFO [RS:0;jenkins-hbase4:38201] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@79cb8f96{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:38,014 INFO [RS:0;jenkins-hbase4:38201] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@160ee4bd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:38,015 INFO [RS:1;jenkins-hbase4:41819] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:38,015 INFO [RS:1;jenkins-hbase4:41819] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:38,015 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:38,015 INFO [RS:2;jenkins-hbase4:34171] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:38,019 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:38,019 INFO [RS:2;jenkins-hbase4:34171] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:38,019 INFO [RS:2;jenkins-hbase4:34171] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:38,019 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:38,019 DEBUG [RS:2;jenkins-hbase4:34171] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x50face39 to 127.0.0.1:56571 2023-07-16 19:15:38,019 DEBUG [RS:2;jenkins-hbase4:34171] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,019 INFO [RS:2;jenkins-hbase4:34171] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:38,015 INFO [RS:1;jenkins-hbase4:41819] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 19:15:38,019 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(3305): Received CLOSE for ef8e48b6645cd9fc03f8400b57fcf495 2023-07-16 19:15:38,019 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(3305): Received CLOSE for f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:38,019 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:38,019 DEBUG [RS:1;jenkins-hbase4:41819] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f7102fa to 127.0.0.1:56571 2023-07-16 19:15:38,020 DEBUG [RS:1;jenkins-hbase4:41819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,020 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 19:15:38,020 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1478): Online Regions={ef8e48b6645cd9fc03f8400b57fcf495=hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495., f6ee148af7855c86512358b2ddb1d812=hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812.} 2023-07-16 19:15:38,020 INFO [RS:2;jenkins-hbase4:34171] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:38,020 INFO [RS:2;jenkins-hbase4:34171] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 19:15:38,020 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 19:15:38,020 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1504): Waiting on ef8e48b6645cd9fc03f8400b57fcf495, f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:38,021 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:38,023 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 19:15:38,023 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1478): Online Regions={cf3b7d987701b941458f77ce1508341a=hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a., 1588230740=hbase:meta,,1.1588230740} 2023-07-16 19:15:38,023 DEBUG [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1504): Waiting on 1588230740, cf3b7d987701b941458f77ce1508341a 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ef8e48b6645cd9fc03f8400b57fcf495, disabling compactions & flushes 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:38,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 
2023-07-16 19:15:38,026 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:38,026 INFO [RS:0;jenkins-hbase4:38201] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:38,026 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. after waiting 0 ms 2023-07-16 19:15:38,026 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:38,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ef8e48b6645cd9fc03f8400b57fcf495 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-16 19:15:38,026 INFO [RS:0;jenkins-hbase4:38201] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:38,026 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:38,027 INFO [RS:0;jenkins-hbase4:38201] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:38,027 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:38,027 DEBUG [RS:0;jenkins-hbase4:38201] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b732aae to 127.0.0.1:56571 2023-07-16 19:15:38,028 DEBUG [RS:0;jenkins-hbase4:38201] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,028 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38201,1689534935205; all regions closed. 2023-07-16 19:15:38,028 DEBUG [RS:0;jenkins-hbase4:38201] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 19:15:38,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/quota/cf3b7d987701b941458f77ce1508341a/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:38,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 
2023-07-16 19:15:38,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cf3b7d987701b941458f77ce1508341a: 2023-07-16 19:15:38,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689534936563.cf3b7d987701b941458f77ce1508341a. 2023-07-16 19:15:38,038 DEBUG [RS:0;jenkins-hbase4:38201] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs 2023-07-16 19:15:38,038 INFO [RS:0;jenkins-hbase4:38201] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38201%2C1689534935205:(num 1689534935891) 2023-07-16 19:15:38,038 DEBUG [RS:0;jenkins-hbase4:38201] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,038 INFO [RS:0;jenkins-hbase4:38201] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:38,039 INFO [RS:0;jenkins-hbase4:38201] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:38,039 INFO [RS:0;jenkins-hbase4:38201] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:38,039 INFO [RS:0;jenkins-hbase4:38201] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:38,039 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:38,039 INFO [RS:0;jenkins-hbase4:38201] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 19:15:38,040 INFO [RS:0;jenkins-hbase4:38201] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38201 2023-07-16 19:15:38,059 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/.tmp/info/2d7733c7634f462b9a665beb1185d913 2023-07-16 19:15:38,063 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/.tmp/info/dacd340f2e1d4273902c2d838e5f7a56 2023-07-16 19:15:38,068 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2d7733c7634f462b9a665beb1185d913 2023-07-16 19:15:38,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dacd340f2e1d4273902c2d838e5f7a56 2023-07-16 19:15:38,071 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/.tmp/info/dacd340f2e1d4273902c2d838e5f7a56 as hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/info/dacd340f2e1d4273902c2d838e5f7a56 2023-07-16 19:15:38,071 INFO [regionserver/jenkins-hbase4:0.leaseChecker] 
regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:38,071 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:38,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dacd340f2e1d4273902c2d838e5f7a56 2023-07-16 19:15:38,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/info/dacd340f2e1d4273902c2d838e5f7a56, entries=3, sequenceid=8, filesize=5.0 K 2023-07-16 19:15:38,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for ef8e48b6645cd9fc03f8400b57fcf495 in 56ms, sequenceid=8, compaction requested=false 2023-07-16 19:15:38,095 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/.tmp/rep_barrier/b101e947353441278bc4bdd6900d28fe 2023-07-16 19:15:38,098 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/namespace/ef8e48b6645cd9fc03f8400b57fcf495/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-16 19:15:38,101 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:38,101 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:38,101 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:38,101 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:38,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 2023-07-16 19:15:38,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ef8e48b6645cd9fc03f8400b57fcf495: 2023-07-16 19:15:38,101 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:38,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689534936215.ef8e48b6645cd9fc03f8400b57fcf495. 
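The flush entries above show the usual memstore-to-HFile path taken while a region closes: the snapshot is written to a temporary file under .tmp/, committed into the column family directory, and recorded with its sequence id. The same path can also be exercised on demand. A minimal sketch, assuming an open Admin; the table name is illustrative only.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class ExplicitFlushSketch {
  /** Force the memstores of every region of a table onto HDFS as HFiles. */
  static void flushNamespaceTable(Admin admin) throws java.io.IOException {
    // Same memstore -> .tmp HFile -> committed store file path as the
    // flush-on-close entries in the log, but triggered by the client.
    admin.flush(TableName.valueOf("hbase", "namespace"));
  }
}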
2023-07-16 19:15:38,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6ee148af7855c86512358b2ddb1d812, disabling compactions & flushes 2023-07-16 19:15:38,102 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38201,1689534935205 2023-07-16 19:15:38,102 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:38,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:38,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:38,102 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38201,1689534935205] 2023-07-16 19:15:38,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. after waiting 0 ms 2023-07-16 19:15:38,102 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38201,1689534935205; numProcessing=1 2023-07-16 19:15:38,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 
2023-07-16 19:15:38,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f6ee148af7855c86512358b2ddb1d812 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-16 19:15:38,104 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38201,1689534935205 already deleted, retry=false 2023-07-16 19:15:38,104 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38201,1689534935205 expired; onlineServers=2 2023-07-16 19:15:38,112 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b101e947353441278bc4bdd6900d28fe 2023-07-16 19:15:38,141 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/.tmp/table/c35f20f1f0564ec38b0fd5e5e53e21cf 2023-07-16 19:15:38,149 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c35f20f1f0564ec38b0fd5e5e53e21cf 2023-07-16 19:15:38,149 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/.tmp/info/2d7733c7634f462b9a665beb1185d913 as hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/info/2d7733c7634f462b9a665beb1185d913 2023-07-16 19:15:38,155 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2d7733c7634f462b9a665beb1185d913 2023-07-16 19:15:38,155 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/info/2d7733c7634f462b9a665beb1185d913, entries=32, sequenceid=31, filesize=8.5 K 2023-07-16 19:15:38,156 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/.tmp/rep_barrier/b101e947353441278bc4bdd6900d28fe as hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/rep_barrier/b101e947353441278bc4bdd6900d28fe 2023-07-16 19:15:38,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b101e947353441278bc4bdd6900d28fe 2023-07-16 19:15:38,163 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/rep_barrier/b101e947353441278bc4bdd6900d28fe, entries=1, sequenceid=31, filesize=4.9 K 2023-07-16 19:15:38,164 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/.tmp/table/c35f20f1f0564ec38b0fd5e5e53e21cf as 
hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/table/c35f20f1f0564ec38b0fd5e5e53e21cf 2023-07-16 19:15:38,170 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c35f20f1f0564ec38b0fd5e5e53e21cf 2023-07-16 19:15:38,170 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/table/c35f20f1f0564ec38b0fd5e5e53e21cf, entries=8, sequenceid=31, filesize=5.2 K 2023-07-16 19:15:38,172 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 145ms, sequenceid=31, compaction requested=false 2023-07-16 19:15:38,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-16 19:15:38,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:38,182 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:38,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 19:15:38,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:38,220 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1504): Waiting on f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:38,223 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34171,1689534935345; all regions closed. 2023-07-16 19:15:38,223 DEBUG [RS:2;jenkins-hbase4:34171] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-16 19:15:38,231 DEBUG [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs 2023-07-16 19:15:38,232 INFO [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34171%2C1689534935345.meta:.meta(num 1689534936141) 2023-07-16 19:15:38,238 DEBUG [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs 2023-07-16 19:15:38,238 INFO [RS:2;jenkins-hbase4:34171] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34171%2C1689534935345:(num 1689534935879) 2023-07-16 19:15:38,238 DEBUG [RS:2;jenkins-hbase4:34171] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,238 INFO [RS:2;jenkins-hbase4:34171] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:38,238 INFO [RS:2;jenkins-hbase4:34171] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:38,238 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:38,239 INFO [RS:2;jenkins-hbase4:34171] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34171 2023-07-16 19:15:38,243 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:38,243 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34171,1689534935345 2023-07-16 19:15:38,243 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:38,245 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34171,1689534935345] 2023-07-16 19:15:38,245 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34171,1689534935345; numProcessing=2 2023-07-16 19:15:38,247 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34171,1689534935345 already deleted, retry=false 2023-07-16 19:15:38,247 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34171,1689534935345 expired; onlineServers=1 2023-07-16 19:15:38,299 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,299 INFO [RS:0;jenkins-hbase4:38201] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38201,1689534935205; zookeeper connection closed. 
2023-07-16 19:15:38,299 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:38201-0x1016f8fb3430001, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,300 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@117484e5] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@117484e5 2023-07-16 19:15:38,421 DEBUG [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1504): Waiting on f6ee148af7855c86512358b2ddb1d812 2023-07-16 19:15:38,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/.tmp/m/6d843e6ae74a45baa21edee421a4d624 2023-07-16 19:15:38,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/.tmp/m/6d843e6ae74a45baa21edee421a4d624 as hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/m/6d843e6ae74a45baa21edee421a4d624 2023-07-16 19:15:38,555 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/m/6d843e6ae74a45baa21edee421a4d624, entries=1, sequenceid=7, filesize=4.9 K 2023-07-16 19:15:38,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for f6ee148af7855c86512358b2ddb1d812 in 454ms, sequenceid=7, compaction requested=false 2023-07-16 19:15:38,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/data/hbase/rsgroup/f6ee148af7855c86512358b2ddb1d812/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-16 19:15:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 19:15:38,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6ee148af7855c86512358b2ddb1d812: 2023-07-16 19:15:38,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689534936208.f6ee148af7855c86512358b2ddb1d812. 2023-07-16 19:15:38,600 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,600 INFO [RS:2;jenkins-hbase4:34171] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34171,1689534935345; zookeeper connection closed. 
2023-07-16 19:15:38,600 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:34171-0x1016f8fb3430003, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,602 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@40a7ecb1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@40a7ecb1 2023-07-16 19:15:38,621 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41819,1689534935274; all regions closed. 2023-07-16 19:15:38,621 DEBUG [RS:1;jenkins-hbase4:41819] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 19:15:38,627 DEBUG [RS:1;jenkins-hbase4:41819] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/oldWALs 2023-07-16 19:15:38,627 INFO [RS:1;jenkins-hbase4:41819] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41819%2C1689534935274:(num 1689534935892) 2023-07-16 19:15:38,627 DEBUG [RS:1;jenkins-hbase4:41819] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,627 INFO [RS:1;jenkins-hbase4:41819] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:38,627 INFO [RS:1;jenkins-hbase4:41819] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:38,627 INFO [RS:1;jenkins-hbase4:41819] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:38,627 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:38,627 INFO [RS:1;jenkins-hbase4:41819] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:38,627 INFO [RS:1;jenkins-hbase4:41819] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 19:15:38,628 INFO [RS:1;jenkins-hbase4:41819] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41819 2023-07-16 19:15:38,632 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:38,632 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41819,1689534935274 2023-07-16 19:15:38,632 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41819,1689534935274] 2023-07-16 19:15:38,632 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41819,1689534935274; numProcessing=3 2023-07-16 19:15:38,635 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41819,1689534935274 already deleted, retry=false 2023-07-16 19:15:38,635 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41819,1689534935274 expired; onlineServers=0 2023-07-16 19:15:38,635 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41265,1689534935134' ***** 2023-07-16 19:15:38,636 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 19:15:38,637 DEBUG [M:0;jenkins-hbase4:41265] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15af93d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:38,637 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:38,638 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:38,638 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:38,639 INFO [M:0;jenkins-hbase4:41265] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4920ba9{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 19:15:38,639 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:38,639 INFO [M:0;jenkins-hbase4:41265] server.AbstractConnector(383): Stopped ServerConnector@414299ab{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:38,639 INFO [M:0;jenkins-hbase4:41265] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:38,639 INFO [M:0;jenkins-hbase4:41265] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4879aa59{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:38,639 INFO [M:0;jenkins-hbase4:41265] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@56f748b4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:38,640 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41265,1689534935134 2023-07-16 19:15:38,640 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41265,1689534935134; all regions closed. 2023-07-16 19:15:38,640 DEBUG [M:0;jenkins-hbase4:41265] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:38,640 INFO [M:0;jenkins-hbase4:41265] master.HMaster(1491): Stopping master jetty server 2023-07-16 19:15:38,640 INFO [M:0;jenkins-hbase4:41265] server.AbstractConnector(383): Stopped ServerConnector@614b4eae{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:38,640 DEBUG [M:0;jenkins-hbase4:41265] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 19:15:38,641 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 19:15:38,641 DEBUG [M:0;jenkins-hbase4:41265] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 19:15:38,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534935635] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534935635,5,FailOnTimeoutGroup] 2023-07-16 19:15:38,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534935636] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534935636,5,FailOnTimeoutGroup] 2023-07-16 19:15:38,641 INFO [M:0;jenkins-hbase4:41265] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 19:15:38,642 INFO [M:0;jenkins-hbase4:41265] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 19:15:38,642 INFO [M:0;jenkins-hbase4:41265] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:38,642 DEBUG [M:0;jenkins-hbase4:41265] master.HMaster(1512): Stopping service threads 2023-07-16 19:15:38,642 INFO [M:0;jenkins-hbase4:41265] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 19:15:38,643 ERROR [M:0;jenkins-hbase4:41265] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 19:15:38,643 INFO [M:0;jenkins-hbase4:41265] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 19:15:38,643 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-16 19:15:38,643 DEBUG [M:0;jenkins-hbase4:41265] zookeeper.ZKUtil(398): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 19:15:38,643 WARN [M:0;jenkins-hbase4:41265] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 19:15:38,643 INFO [M:0;jenkins-hbase4:41265] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 19:15:38,644 INFO [M:0;jenkins-hbase4:41265] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 19:15:38,644 DEBUG [M:0;jenkins-hbase4:41265] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 19:15:38,644 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:38,644 DEBUG [M:0;jenkins-hbase4:41265] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:38,644 DEBUG [M:0;jenkins-hbase4:41265] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 19:15:38,644 DEBUG [M:0;jenkins-hbase4:41265] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:38,644 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-16 19:15:38,658 INFO [M:0;jenkins-hbase4:41265] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/238e2e302ab944a0ac002e078469f12a 2023-07-16 19:15:38,663 DEBUG [M:0;jenkins-hbase4:41265] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/238e2e302ab944a0ac002e078469f12a as hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/238e2e302ab944a0ac002e078469f12a 2023-07-16 19:15:38,670 INFO [M:0;jenkins-hbase4:41265] regionserver.HStore(1080): Added hdfs://localhost:43643/user/jenkins/test-data/b17d1de9-f1ae-fcb6-4296-e6d8bf6f6f3d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/238e2e302ab944a0ac002e078469f12a, entries=24, sequenceid=194, filesize=12.4 K 2023-07-16 19:15:38,671 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95214, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=194, compaction requested=false 2023-07-16 19:15:38,673 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 19:15:38,673 DEBUG [M:0;jenkins-hbase4:41265] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:38,677 INFO [M:0;jenkins-hbase4:41265] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 19:15:38,677 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:38,677 INFO [M:0;jenkins-hbase4:41265] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41265 2023-07-16 19:15:38,679 DEBUG [M:0;jenkins-hbase4:41265] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41265,1689534935134 already deleted, retry=false 2023-07-16 19:15:38,733 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,733 INFO [RS:1;jenkins-hbase4:41819] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41819,1689534935274; zookeeper connection closed. 2023-07-16 19:15:38,733 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): regionserver:41819-0x1016f8fb3430002, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,734 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@26658271] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@26658271 2023-07-16 19:15:38,734 INFO [Listener at localhost/36007] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-16 19:15:38,833 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,833 INFO [M:0;jenkins-hbase4:41265] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41265,1689534935134; zookeeper connection closed. 
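Note: the "Shutdown of 1 master(s) and 3 regionserver(s) complete" entry above is the tail end of a minicluster teardown. As a hedged sketch only (the test source is not part of this log, and the field name TEST_UTIL is an assumption), the teardown that produces this kind of output is typically a single call on the testing utility:

    // Hedged sketch: roughly how an HBase test tears down the minicluster
    // whose shutdown messages appear above. TEST_UTIL is a hypothetical name.
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterTeardownSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      public static void tearDown() throws Exception {
        // Stops the HMaster, region servers, DataNodes and the MiniZK cluster,
        // matching the JVMClusterUtil shutdown-complete line in the log.
        TEST_UTIL.shutdownMiniCluster();
      }
    }
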
2023-07-16 19:15:38,833 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): master:41265-0x1016f8fb3430000, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 19:15:38,834 WARN [Listener at localhost/36007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 19:15:38,838 INFO [Listener at localhost/36007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:38,942 WARN [BP-1579636532-172.31.14.131-1689534934206 heartbeating to localhost/127.0.0.1:43643] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 19:15:38,944 WARN [BP-1579636532-172.31.14.131-1689534934206 heartbeating to localhost/127.0.0.1:43643] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579636532-172.31.14.131-1689534934206 (Datanode Uuid 675ece7b-01be-4254-aab4-0f9c149e35a0) service to localhost/127.0.0.1:43643 2023-07-16 19:15:38,945 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/dfs/data/data5/current/BP-1579636532-172.31.14.131-1689534934206] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:38,945 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/dfs/data/data6/current/BP-1579636532-172.31.14.131-1689534934206] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:38,947 WARN [Listener at localhost/36007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 19:15:38,951 INFO [Listener at localhost/36007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:39,053 WARN [BP-1579636532-172.31.14.131-1689534934206 heartbeating to localhost/127.0.0.1:43643] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 19:15:39,053 WARN [BP-1579636532-172.31.14.131-1689534934206 heartbeating to localhost/127.0.0.1:43643] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579636532-172.31.14.131-1689534934206 (Datanode Uuid acd51154-c4c7-46ec-ba4c-cfd639a6611e) service to localhost/127.0.0.1:43643 2023-07-16 19:15:39,054 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/dfs/data/data3/current/BP-1579636532-172.31.14.131-1689534934206] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:39,054 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/dfs/data/data4/current/BP-1579636532-172.31.14.131-1689534934206] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:39,055 WARN [Listener at localhost/36007] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-16 19:15:39,058 INFO [Listener at localhost/36007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:39,161 WARN [BP-1579636532-172.31.14.131-1689534934206 heartbeating to localhost/127.0.0.1:43643] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 19:15:39,161 WARN [BP-1579636532-172.31.14.131-1689534934206 heartbeating to localhost/127.0.0.1:43643] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1579636532-172.31.14.131-1689534934206 (Datanode Uuid 139ac45a-2a52-49aa-9bba-13f32dde85b5) service to localhost/127.0.0.1:43643 2023-07-16 19:15:39,162 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/dfs/data/data1/current/BP-1579636532-172.31.14.131-1689534934206] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:39,162 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/cluster_f0f6e5f9-c632-d1db-bd84-86850d28e4b3/dfs/data/data2/current/BP-1579636532-172.31.14.131-1689534934206] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 19:15:39,171 INFO [Listener at localhost/36007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 19:15:39,297 INFO [Listener at localhost/36007] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 19:15:39,336 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 19:15:39,336 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 19:15:39,336 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.log.dir so I do NOT create it in target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834 2023-07-16 19:15:39,336 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6f39fc17-98ca-7bc7-2311-e8bda766e5ef/hadoop.tmp.dir so I do NOT create it in target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834 2023-07-16 19:15:39,336 INFO [Listener at localhost/36007] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a, deleteOnExit=true 2023-07-16 19:15:39,336 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 19:15:39,337 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/test.cache.data in system properties and HBase conf 2023-07-16 19:15:39,337 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 19:15:39,337 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir in system properties and HBase conf 2023-07-16 19:15:39,337 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 19:15:39,337 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 19:15:39,338 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 19:15:39,339 DEBUG [Listener at localhost/36007] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-16 19:15:39,339 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 19:15:39,340 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 19:15:39,340 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 19:15:39,340 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 19:15:39,341 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 19:15:39,341 INFO [Listener at localhost/36007] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 19:15:39,341 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 19:15:39,341 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 19:15:39,341 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 19:15:39,341 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/nfs.dump.dir in system properties and HBase conf 2023-07-16 19:15:39,342 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/java.io.tmpdir in system properties and HBase conf 2023-07-16 19:15:39,342 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 19:15:39,342 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 19:15:39,342 INFO [Listener at localhost/36007] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 19:15:39,347 WARN [Listener at localhost/36007] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 19:15:39,347 WARN [Listener at localhost/36007] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 19:15:39,392 DEBUG [Listener at localhost/36007-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016f8fb343000a, quorum=127.0.0.1:56571, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 
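Note: the "Starting up minicluster with option: StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1, ...}" entry above corresponds to the second cluster start in this test. A hedged sketch of the builder call that typically produces such a line (the actual test code is not shown in this log):

    // Sketch only: restart a minicluster with the option values seen in the log.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartSketch {
      public static void restart(HBaseTestingUtility util) throws Exception {
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)        // numMasters=1 in the logged option
            .numRegionServers(3)  // numRegionServers=3
            .numDataNodes(3)      // numDataNodes=3
            .numZkServers(1)      // numZkServers=1
            .build();
        util.startMiniCluster(option);
      }
    }
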
2023-07-16 19:15:39,392 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016f8fb343000a, quorum=127.0.0.1:56571, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 19:15:39,403 WARN [Listener at localhost/36007] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:39,405 INFO [Listener at localhost/36007] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:39,415 INFO [Listener at localhost/36007] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/java.io.tmpdir/Jetty_localhost_42035_hdfs____vobx8e/webapp 2023-07-16 19:15:39,522 INFO [Listener at localhost/36007] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42035 2023-07-16 19:15:39,526 WARN [Listener at localhost/36007] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 19:15:39,526 WARN [Listener at localhost/36007] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 19:15:39,568 WARN [Listener at localhost/36299] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:39,620 WARN [Listener at localhost/36299] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:39,622 WARN [Listener at localhost/36299] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:39,624 INFO [Listener at localhost/36299] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:39,632 INFO [Listener at localhost/36299] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/java.io.tmpdir/Jetty_localhost_40263_datanode____.1pw3l2/webapp 2023-07-16 19:15:39,736 INFO [Listener at localhost/36299] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40263 2023-07-16 19:15:39,744 WARN [Listener at localhost/33491] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:39,781 WARN [Listener at localhost/33491] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:39,784 WARN [Listener at localhost/33491] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:39,785 INFO [Listener at localhost/33491] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:39,789 INFO [Listener at localhost/33491] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/java.io.tmpdir/Jetty_localhost_46417_datanode____jvppxb/webapp 2023-07-16 19:15:39,896 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x64280f2aa3a86f40: Processing first storage report for DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0 from datanode 93834aaa-932d-4a3e-ac81-4ade5aaa7064 2023-07-16 19:15:39,896 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x64280f2aa3a86f40: from storage DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0 node DatanodeRegistration(127.0.0.1:35497, datanodeUuid=93834aaa-932d-4a3e-ac81-4ade5aaa7064, infoPort=37469, infoSecurePort=0, ipcPort=33491, storageInfo=lv=-57;cid=testClusterID;nsid=612623783;c=1689534939350), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:39,896 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x64280f2aa3a86f40: Processing first storage report for DS-2d86a615-f212-4c0e-b273-962de506f94b from datanode 93834aaa-932d-4a3e-ac81-4ade5aaa7064 2023-07-16 19:15:39,896 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x64280f2aa3a86f40: from storage DS-2d86a615-f212-4c0e-b273-962de506f94b node DatanodeRegistration(127.0.0.1:35497, datanodeUuid=93834aaa-932d-4a3e-ac81-4ade5aaa7064, infoPort=37469, infoSecurePort=0, ipcPort=33491, storageInfo=lv=-57;cid=testClusterID;nsid=612623783;c=1689534939350), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:39,905 INFO [Listener at localhost/33491] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46417 2023-07-16 19:15:39,913 WARN [Listener at localhost/46619] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:39,926 WARN [Listener at localhost/46619] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 19:15:39,928 WARN [Listener at localhost/46619] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 19:15:39,929 INFO [Listener at localhost/46619] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 19:15:39,933 INFO [Listener at localhost/46619] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/java.io.tmpdir/Jetty_localhost_46431_datanode____w9tu47/webapp 2023-07-16 19:15:39,999 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x747956369c0d72db: Processing first storage report for DS-660a3e9a-7b82-4408-a9f8-afc6895684e3 from datanode 15e992a8-d227-450b-89fe-697c5b720150 2023-07-16 19:15:40,000 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x747956369c0d72db: from storage DS-660a3e9a-7b82-4408-a9f8-afc6895684e3 node DatanodeRegistration(127.0.0.1:39707, datanodeUuid=15e992a8-d227-450b-89fe-697c5b720150, infoPort=41585, infoSecurePort=0, ipcPort=46619, storageInfo=lv=-57;cid=testClusterID;nsid=612623783;c=1689534939350), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:40,000 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x747956369c0d72db: Processing first storage report for DS-f8030906-5088-4e90-9a81-af19c5765e85 from datanode 
15e992a8-d227-450b-89fe-697c5b720150 2023-07-16 19:15:40,000 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x747956369c0d72db: from storage DS-f8030906-5088-4e90-9a81-af19c5765e85 node DatanodeRegistration(127.0.0.1:39707, datanodeUuid=15e992a8-d227-450b-89fe-697c5b720150, infoPort=41585, infoSecurePort=0, ipcPort=46619, storageInfo=lv=-57;cid=testClusterID;nsid=612623783;c=1689534939350), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:40,037 INFO [Listener at localhost/46619] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46431 2023-07-16 19:15:40,043 WARN [Listener at localhost/42605] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 19:15:40,141 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x98509b11ad8b77f3: Processing first storage report for DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a from datanode cab3eae8-76a8-4096-9631-fc7d8996bcf7 2023-07-16 19:15:40,141 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x98509b11ad8b77f3: from storage DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a node DatanodeRegistration(127.0.0.1:46173, datanodeUuid=cab3eae8-76a8-4096-9631-fc7d8996bcf7, infoPort=41759, infoSecurePort=0, ipcPort=42605, storageInfo=lv=-57;cid=testClusterID;nsid=612623783;c=1689534939350), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:40,141 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x98509b11ad8b77f3: Processing first storage report for DS-92d1a1a4-897d-4e7c-8cca-fe23c99a9b41 from datanode cab3eae8-76a8-4096-9631-fc7d8996bcf7 2023-07-16 19:15:40,141 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x98509b11ad8b77f3: from storage DS-92d1a1a4-897d-4e7c-8cca-fe23c99a9b41 node DatanodeRegistration(127.0.0.1:46173, datanodeUuid=cab3eae8-76a8-4096-9631-fc7d8996bcf7, infoPort=41759, infoSecurePort=0, ipcPort=42605, storageInfo=lv=-57;cid=testClusterID;nsid=612623783;c=1689534939350), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 19:15:40,149 DEBUG [Listener at localhost/42605] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834 2023-07-16 19:15:40,151 INFO [Listener at localhost/42605] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/zookeeper_0, clientPort=62260, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 19:15:40,153 INFO [Listener at localhost/42605] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=62260 2023-07-16 19:15:40,153 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,154 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,168 INFO [Listener at localhost/42605] util.FSUtils(471): Created version file at hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991 with version=8 2023-07-16 19:15:40,168 INFO [Listener at localhost/42605] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:34211/user/jenkins/test-data/06f3c45f-4d06-7b8e-8dc4-1d8d8d8a8049/hbase-staging 2023-07-16 19:15:40,169 DEBUG [Listener at localhost/42605] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 19:15:40,169 DEBUG [Listener at localhost/42605] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 19:15:40,169 DEBUG [Listener at localhost/42605] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 19:15:40,169 DEBUG [Listener at localhost/42605] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 19:15:40,170 INFO [Listener at localhost/42605] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:40,170 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,170 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,171 INFO [Listener at localhost/42605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:40,171 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,171 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:40,171 INFO [Listener at localhost/42605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:40,171 INFO [Listener at localhost/42605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45041 2023-07-16 19:15:40,172 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,173 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,173 INFO [Listener at localhost/42605] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45041 connecting to ZooKeeper ensemble=127.0.0.1:62260 2023-07-16 19:15:40,180 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:450410x0, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:40,181 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45041-0x1016f8fc6fa0000 connected 2023-07-16 19:15:40,199 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:40,199 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:40,199 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:40,202 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45041 2023-07-16 19:15:40,203 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45041 2023-07-16 19:15:40,203 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45041 2023-07-16 19:15:40,203 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45041 2023-07-16 19:15:40,203 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45041 2023-07-16 19:15:40,205 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:40,205 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:40,206 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:40,206 INFO [Listener at localhost/42605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 19:15:40,206 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:40,206 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:40,206 INFO [Listener at localhost/42605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
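Note: the ipc.RpcExecutor entries above report handlerCount=3 and maxQueueLength=30 for the default.FPBQ.Fifo executor. As an assumption on my part (the log does not show which configuration key the test sets), a small handler pool like this is usually configured along these lines:

    // Hedged sketch, not taken from the test source: one plausible way to end
    // up with handlerCount=3 in the RpcExecutor lines above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcHandlerConfSketch {
      public static Configuration smallHandlerPool() {
        Configuration conf = HBaseConfiguration.create();
        // Number of RPC handler threads per server; assumed to be the source
        // of the logged handlerCount=3.
        conf.setInt("hbase.regionserver.handler.count", 3);
        return conf;
      }
    }
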
2023-07-16 19:15:40,207 INFO [Listener at localhost/42605] http.HttpServer(1146): Jetty bound to port 43263 2023-07-16 19:15:40,207 INFO [Listener at localhost/42605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:40,213 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,213 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@633dccf4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:40,214 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,214 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69823956{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:40,220 INFO [Listener at localhost/42605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:40,221 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:40,221 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:40,222 INFO [Listener at localhost/42605] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:40,223 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,224 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@105e13de{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 19:15:40,225 INFO [Listener at localhost/42605] server.AbstractConnector(333): Started ServerConnector@22366487{HTTP/1.1, (http/1.1)}{0.0.0.0:43263} 2023-07-16 19:15:40,225 INFO [Listener at localhost/42605] server.Server(415): Started @41941ms 2023-07-16 19:15:40,225 INFO [Listener at localhost/42605] master.HMaster(444): hbase.rootdir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991, hbase.cluster.distributed=false 2023-07-16 19:15:40,239 INFO [Listener at localhost/42605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:40,239 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,239 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,239 INFO [Listener at localhost/42605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 
19:15:40,239 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,239 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:40,239 INFO [Listener at localhost/42605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:40,240 INFO [Listener at localhost/42605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39113 2023-07-16 19:15:40,240 INFO [Listener at localhost/42605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:40,241 DEBUG [Listener at localhost/42605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:40,242 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,243 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,244 INFO [Listener at localhost/42605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39113 connecting to ZooKeeper ensemble=127.0.0.1:62260 2023-07-16 19:15:40,247 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:391130x0, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:40,249 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39113-0x1016f8fc6fa0001 connected 2023-07-16 19:15:40,249 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:40,249 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:40,250 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:40,250 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39113 2023-07-16 19:15:40,250 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39113 2023-07-16 19:15:40,252 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39113 2023-07-16 19:15:40,253 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39113 2023-07-16 19:15:40,253 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39113 2023-07-16 19:15:40,255 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:40,255 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:40,256 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:40,256 INFO [Listener at localhost/42605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:40,256 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:40,256 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:40,257 INFO [Listener at localhost/42605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:40,257 INFO [Listener at localhost/42605] http.HttpServer(1146): Jetty bound to port 38123 2023-07-16 19:15:40,257 INFO [Listener at localhost/42605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:40,263 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,263 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@666a8c86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:40,263 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,264 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39bfefc6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:40,270 INFO [Listener at localhost/42605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:40,270 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:40,270 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:40,271 INFO [Listener at localhost/42605] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:40,274 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,275 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2ab355a6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:40,276 INFO [Listener at localhost/42605] server.AbstractConnector(333): Started ServerConnector@30ce9388{HTTP/1.1, (http/1.1)}{0.0.0.0:38123} 2023-07-16 19:15:40,276 INFO [Listener at localhost/42605] server.Server(415): Started @41992ms 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:40,289 INFO [Listener at localhost/42605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:40,291 INFO [Listener at localhost/42605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40351 2023-07-16 19:15:40,291 INFO [Listener at localhost/42605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:40,292 DEBUG [Listener at localhost/42605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:40,293 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,293 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,294 INFO [Listener at localhost/42605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40351 connecting to ZooKeeper ensemble=127.0.0.1:62260 2023-07-16 19:15:40,298 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:403510x0, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:40,299 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:403510x0, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:40,299 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40351-0x1016f8fc6fa0002 connected 2023-07-16 19:15:40,299 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:40,300 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:40,302 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40351 2023-07-16 19:15:40,304 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40351 2023-07-16 19:15:40,304 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40351 2023-07-16 19:15:40,305 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40351 2023-07-16 19:15:40,306 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40351 2023-07-16 19:15:40,308 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:40,308 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:40,309 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:40,309 INFO [Listener at localhost/42605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:40,309 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:40,310 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:40,310 INFO [Listener at localhost/42605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
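Note: the repeated "Set watcher on znode that does not yet exist, /hbase/master" entries above come from each server registering a watch before the active master has created its znode. A hedged illustration with the plain ZooKeeper client API (not HBase's internal ZKUtil) of why that works:

    // Sketch: exists() registers a watch whether or not the znode is present;
    // the watcher fires later when /hbase/master is created or changed.
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
      public static void watchMasterZnode(String quorum) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("ZK event: " + event.getType() + " on " + event.getPath());
        ZooKeeper zk = new ZooKeeper(quorum, 30000, watcher);
        // Registers the default watcher on /hbase/master even though the
        // znode does not exist yet.
        zk.exists("/hbase/master", true);
      }
    }
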
2023-07-16 19:15:40,310 INFO [Listener at localhost/42605] http.HttpServer(1146): Jetty bound to port 46695 2023-07-16 19:15:40,311 INFO [Listener at localhost/42605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:40,315 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,315 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@730a2ea8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:40,316 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,316 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ceb7e75{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:40,320 INFO [Listener at localhost/42605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:40,321 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:40,321 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:40,321 INFO [Listener at localhost/42605] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:40,322 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,322 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@24fd7fc7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:40,325 INFO [Listener at localhost/42605] server.AbstractConnector(333): Started ServerConnector@66049cc4{HTTP/1.1, (http/1.1)}{0.0.0.0:46695} 2023-07-16 19:15:40,325 INFO [Listener at localhost/42605] server.Server(415): Started @42041ms 2023-07-16 19:15:40,339 INFO [Listener at localhost/42605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:40,339 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,339 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,339 INFO [Listener at localhost/42605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:40,339 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-16 19:15:40,339 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:40,340 INFO [Listener at localhost/42605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:40,342 INFO [Listener at localhost/42605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41397 2023-07-16 19:15:40,342 INFO [Listener at localhost/42605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:40,350 DEBUG [Listener at localhost/42605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:40,351 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,352 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,353 INFO [Listener at localhost/42605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41397 connecting to ZooKeeper ensemble=127.0.0.1:62260 2023-07-16 19:15:40,374 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:413970x0, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:40,375 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:413970x0, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 19:15:40,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41397-0x1016f8fc6fa0003 connected 2023-07-16 19:15:40,376 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:40,376 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:40,377 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41397 2023-07-16 19:15:40,377 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41397 2023-07-16 19:15:40,378 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41397 2023-07-16 19:15:40,384 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41397 2023-07-16 19:15:40,384 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41397 2023-07-16 19:15:40,385 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:40,386 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:40,386 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:40,386 INFO [Listener at localhost/42605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:40,386 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:40,386 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:40,387 INFO [Listener at localhost/42605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 19:15:40,387 INFO [Listener at localhost/42605] http.HttpServer(1146): Jetty bound to port 44061 2023-07-16 19:15:40,387 INFO [Listener at localhost/42605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:40,391 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,391 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@12ca8bea{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:40,391 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,392 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15be34c0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:40,396 INFO [Listener at localhost/42605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:40,397 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:40,397 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:40,397 INFO [Listener at localhost/42605] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 19:15:40,399 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:40,400 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@164f60c8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:40,401 INFO [Listener at localhost/42605] server.AbstractConnector(333): Started ServerConnector@b2fe258{HTTP/1.1, (http/1.1)}{0.0.0.0:44061} 2023-07-16 19:15:40,402 INFO [Listener at localhost/42605] server.Server(415): Started @42117ms 2023-07-16 19:15:40,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:40,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5409c29{HTTP/1.1, (http/1.1)}{0.0.0.0:46757} 2023-07-16 19:15:40,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42125ms 2023-07-16 19:15:40,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,416 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 19:15:40,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,418 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:40,418 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:40,418 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:40,418 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:40,418 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 19:15:40,420 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:40,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45041,1689534940170 from backup master directory 2023-07-16 
19:15:40,422 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:40,423 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,423 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 19:15:40,423 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:40,423 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,438 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-16 19:15:40,439 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/hbase.id with ID: a52829f4-01d8-4e67-bf87-4b7c0e4f9e30 2023-07-16 19:15:40,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:40,470 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:40,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3fe534ce to 127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:40,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60b4ff83, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:40,500 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:40,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 19:15:40,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:40,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store-tmp 2023-07-16 19:15:40,549 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:40,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 19:15:40,551 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:40,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:40,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 19:15:40,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 19:15:40,553 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
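The 'master:store' region created above carries a single 'proc' column family whose attributes are printed inline (BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', BLOCKCACHE => 'true', ...). This is not how MasterRegion assembles the descriptor internally, but the same family can be expressed with the public HBase 2.x builder API; a sketch, with the table name spelled out only for illustration:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
    public static void main(String[] args) {
        // Mirrors the 'proc' family printed for 'master:store' in the log above.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                .setMaxVersions(1)                  // VERSIONS => '1'
                .setBlocksize(64 * 1024)            // BLOCKSIZE => '65536'
                .setBlockCacheEnabled(true)         // BLOCKCACHE => 'true'
                .setInMemory(false)                 // IN_MEMORY => 'false'
                .build();
        TableDescriptor store = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("master", "store"))
                .setColumnFamily(proc)
                .build();
        System.out.println(store);
    }
}
```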
2023-07-16 19:15:40,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:40,553 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/WALs/jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45041%2C1689534940170, suffix=, logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/WALs/jenkins-hbase4.apache.org,45041,1689534940170, archiveDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/oldWALs, maxLogs=10 2023-07-16 19:15:40,579 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK] 2023-07-16 19:15:40,579 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK] 2023-07-16 19:15:40,587 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK] 2023-07-16 19:15:40,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/WALs/jenkins-hbase4.apache.org,45041,1689534940170/jenkins-hbase4.apache.org%2C45041%2C1689534940170.1689534940561 2023-07-16 19:15:40,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK], DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK], DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK]] 2023-07-16 19:15:40,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:40,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:40,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:40,590 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:40,592 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:40,594 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 19:15:40,594 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 19:15:40,595 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:40,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:40,596 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:40,600 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 19:15:40,603 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:40,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11864479840, jitterRate=0.10496579110622406}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:40,604 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 19:15:40,604 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 19:15:40,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 19:15:40,606 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 19:15:40,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 19:15:40,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 19:15:40,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 19:15:40,607 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 19:15:40,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 19:15:40,611 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 19:15:40,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 19:15:40,612 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 19:15:40,612 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 19:15:40,614 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:40,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 19:15:40,615 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 19:15:40,616 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 19:15:40,618 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:40,618 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:40,618 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-16 19:15:40,618 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:40,618 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:40,618 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45041,1689534940170, sessionid=0x1016f8fc6fa0000, setting cluster-up flag (Was=false) 2023-07-16 19:15:40,624 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:40,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 19:15:40,629 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,633 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:40,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 19:15:40,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:40,639 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.hbase-snapshot/.tmp 2023-07-16 19:15:40,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 19:15:40,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 19:15:40,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 19:15:40,641 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:40,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
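The coprocessor lines above show the RSGroupAdminEndpoint being loaded and the RSGroupInfoManager refreshing in offline mode. A hedged sketch of the usual way a test or site configuration enables that endpoint together with the group-aware balancer; the two property keys are standard HBase keys, and the class names are the ones appearing in this log (branch-2.4 hbase-rsgroup):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupCoprocessorConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded"
        // corresponds to wiring the endpoint in through the master coprocessor key.
        conf.set("hbase.coprocessor.master.classes",
                 "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // The matching balancer keeps region placement inside each server group.
        conf.set("hbase.master.loadbalancer.class",
                 "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        System.out.println(conf.get("hbase.coprocessor.master.classes"));
    }
}
```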
2023-07-16 19:15:40,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:40,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 19:15:40,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 19:15:40,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 19:15:40,658 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:40,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689534970665 2023-07-16 19:15:40,665 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 19:15:40,665 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 19:15:40,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 19:15:40,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 19:15:40,667 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:40,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534940667,5,FailOnTimeoutGroup] 2023-07-16 19:15:40,667 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534940667,5,FailOnTimeoutGroup] 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,687 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:40,688 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:40,688 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991 2023-07-16 19:15:40,705 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(951): ClusterId : a52829f4-01d8-4e67-bf87-4b7c0e4f9e30 2023-07-16 19:15:40,705 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:40,705 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(951): ClusterId : a52829f4-01d8-4e67-bf87-4b7c0e4f9e30 2023-07-16 19:15:40,705 DEBUG [RS:1;jenkins-hbase4:40351] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:40,710 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:40,710 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:40,710 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(951): ClusterId : a52829f4-01d8-4e67-bf87-4b7c0e4f9e30 2023-07-16 19:15:40,710 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:40,710 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:40,711 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:40,711 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:40,712 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:40,713 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/info 2023-07-16 19:15:40,714 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:40,714 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:40,714 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:40,714 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:40,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:40,721 DEBUG [RS:1;jenkins-hbase4:40351] zookeeper.ReadOnlyZKClient(139): Connect 0x25c91e9a to 127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:40,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:40,722 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:40,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:40,724 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:40,724 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:40,724 DEBUG [RS:0;jenkins-hbase4:39113] zookeeper.ReadOnlyZKClient(139): Connect 0x0b8740e4 to 127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:40,725 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:40,725 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:40,726 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/table 2023-07-16 19:15:40,726 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:40,726 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:40,731 DEBUG [RS:2;jenkins-hbase4:41397] zookeeper.ReadOnlyZKClient(139): Connect 0x391b8bc6 to 
127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:40,732 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740 2023-07-16 19:15:40,733 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740 2023-07-16 19:15:40,735 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 19:15:40,737 DEBUG [RS:1;jenkins-hbase4:40351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b0ee453, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:40,737 DEBUG [RS:1;jenkins-hbase4:40351] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f3175ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:40,738 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:40,738 DEBUG [RS:0;jenkins-hbase4:39113] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fb8d54c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:40,739 DEBUG [RS:0;jenkins-hbase4:39113] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4aba4adf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:40,746 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:40,747 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40351 2023-07-16 19:15:40,747 INFO [RS:1;jenkins-hbase4:40351] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:40,747 INFO [RS:1;jenkins-hbase4:40351] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:40,747 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1022): About to register with Master. 
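The FlushLargeStoresPolicy line above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta descriptor, so the bound falls back to the region's memstore flush size divided by the number of families (three here: info, rep_barrier, table), the "42.7 M" in the log. A sketch of that arithmetic plus setting the bound explicitly on a table descriptor; the table name "demo" and the 64 MB value are purely illustrative:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBoundSketch {
    public static void main(String[] args) {
        // Fallback derivation: memstore flush size / number of column families.
        long flushSize = 134_217_728L;            // 128 MB, the flushSize logged earlier
        int families = 3;                          // info, rep_barrier, table
        System.out.println(flushSize / families);  // 44739242 bytes, i.e. the ~42.7 M fallback

        // Setting the bound explicitly as a table-descriptor value instead (hypothetical table):
        TableDescriptorBuilder b = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo"));
        b.setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                   String.valueOf(64L * 1024 * 1024));
        System.out.println(b.build().getValue("hbase.hregion.percolumnfamilyflush.size.lower.bound"));
    }
}
```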
2023-07-16 19:15:40,747 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11116567040, jitterRate=0.035310983657836914}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:40,747 DEBUG [RS:2;jenkins-hbase4:41397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c191281, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:40,747 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:40,747 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:40,747 DEBUG [RS:2;jenkins-hbase4:41397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50d3bbe4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:40,747 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39113 2023-07-16 19:15:40,747 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:40,747 INFO [RS:0;jenkins-hbase4:39113] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:40,747 INFO [RS:0;jenkins-hbase4:39113] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:40,747 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:40,747 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45041,1689534940170 with isa=jenkins-hbase4.apache.org/172.31.14.131:40351, startcode=1689534940288 2023-07-16 19:15:40,747 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1022): About to register with Master. 
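The split policy printed when the meta region opens above shows desiredMaxFileSize=11116567040 with jitterRate=0.035310983657836914. Assuming the default hbase.hregion.max.filesize of 10 GB, that figure is the base size plus base size times the jitter rate; a quick check of the arithmetic:

```java
public class SplitSizeJitterSketch {
    public static void main(String[] args) {
        // Values taken from the "Opened 1588230740" line above.
        long maxFileSize = 10L * 1024 * 1024 * 1024;   // 10737418240, hbase.hregion.max.filesize default
        double jitterRate = 0.035310983657836914;
        long desired = maxFileSize + (long) (maxFileSize * jitterRate);
        System.out.println(desired); // ~11116567040, matching the logged value up to float rounding
    }
}
```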
2023-07-16 19:15:40,747 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:40,748 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:40,748 DEBUG [RS:1;jenkins-hbase4:40351] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:40,748 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 19:15:40,748 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 19:15:40,748 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45041,1689534940170 with isa=jenkins-hbase4.apache.org/172.31.14.131:39113, startcode=1689534940238 2023-07-16 19:15:40,748 DEBUG [RS:0;jenkins-hbase4:39113] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:40,749 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 19:15:40,750 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 19:15:40,750 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 19:15:40,756 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41397 2023-07-16 19:15:40,756 INFO [RS:2;jenkins-hbase4:41397] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:40,756 INFO [RS:2;jenkins-hbase4:41397] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:40,756 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1022): About to register with Master. 
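The "Using SIMPLE authentication ... sasl=false" entries above reflect the unsecured default of this mini cluster. For orientation only: the property governing this is hbase.security.authentication, which a secured deployment would set to "kerberos" on both client and server. A minimal sketch:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SimpleAuthConfigSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // "simple" is the unsecured default seen in the log; "kerberos" enables SASL.
        conf.set("hbase.security.authentication", "simple");
        System.out.println(conf.get("hbase.security.authentication"));
    }
}
```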
2023-07-16 19:15:40,756 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45041,1689534940170 with isa=jenkins-hbase4.apache.org/172.31.14.131:41397, startcode=1689534940338 2023-07-16 19:15:40,757 DEBUG [RS:2;jenkins-hbase4:41397] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:40,759 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 19:15:40,762 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44879, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:40,763 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56939, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:40,764 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45041] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,767 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34247, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:40,767 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:40,767 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45041] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 19:15:40,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 19:15:40,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 19:15:40,770 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45041] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,770 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 19:15:40,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
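The ServerEventsListenerThread above logs "Updated with servers: 1/2/3" as each region server registers and joins the default RSGroup. A sketch of reading that membership back through the RSGroupAdminClient shipped in this hbase-rsgroup module; the constructor and method names are assumed from the branch-2.4 client and may differ in other versions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupServersSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            // Reads back the membership that the listener thread above reports incrementally.
            RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
            RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
            System.out.println(defaultGroup.getServers().size()); // 3 once all region servers joined
        }
    }
}
```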
2023-07-16 19:15:40,770 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 19:15:40,770 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991 2023-07-16 19:15:40,770 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36299 2023-07-16 19:15:40,770 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43263 2023-07-16 19:15:40,771 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991 2023-07-16 19:15:40,771 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36299 2023-07-16 19:15:40,771 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43263 2023-07-16 19:15:40,772 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991 2023-07-16 19:15:40,772 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:40,772 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36299 2023-07-16 19:15:40,772 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43263 2023-07-16 19:15:40,777 DEBUG [RS:0;jenkins-hbase4:39113] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,777 WARN [RS:0;jenkins-hbase4:39113] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:40,778 INFO [RS:0;jenkins-hbase4:39113] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:40,778 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,778 DEBUG [RS:1;jenkins-hbase4:40351] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,779 WARN [RS:1;jenkins-hbase4:40351] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
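The "Config from master" lines show the master pushing hbase.rootdir, fs.defaultFS, and hbase.master.info.port down to each region server during registration. A compact sketch of setting the same keys on a client-side Configuration (the HDFS URI and port here are illustrative placeholders, not the values from this run):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ConfFromMasterSketch {
      public static void main(String[] args) {
        // Loads hbase-default.xml and any hbase-site.xml on the classpath.
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://localhost:8020/hbase");   // illustrative values only
        conf.set("fs.defaultFS", "hdfs://localhost:8020");
        conf.setInt("hbase.master.info.port", 16010);
        System.out.println(conf.get("hbase.rootdir"));
      }
    }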
2023-07-16 19:15:40,779 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41397,1689534940338] 2023-07-16 19:15:40,779 INFO [RS:1;jenkins-hbase4:40351] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:40,779 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39113,1689534940238] 2023-07-16 19:15:40,779 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,780 DEBUG [RS:2;jenkins-hbase4:41397] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,779 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40351,1689534940288] 2023-07-16 19:15:40,780 WARN [RS:2;jenkins-hbase4:41397] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 19:15:40,780 INFO [RS:2;jenkins-hbase4:41397] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:40,782 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,783 DEBUG [RS:0;jenkins-hbase4:39113] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,783 DEBUG [RS:0;jenkins-hbase4:39113] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,784 DEBUG [RS:0;jenkins-hbase4:39113] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,784 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:40,784 INFO [RS:0;jenkins-hbase4:39113] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:40,788 DEBUG [RS:1;jenkins-hbase4:40351] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,788 DEBUG [RS:1;jenkins-hbase4:40351] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,789 DEBUG [RS:1;jenkins-hbase4:40351] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 
19:15:40,789 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:40,789 INFO [RS:1;jenkins-hbase4:40351] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:40,793 INFO [RS:0;jenkins-hbase4:39113] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:40,799 INFO [RS:1;jenkins-hbase4:40351] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:40,803 INFO [RS:0;jenkins-hbase4:39113] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:40,803 INFO [RS:1;jenkins-hbase4:40351] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:40,803 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,803 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,805 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:40,805 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:40,807 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,807 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
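The ScheduledChore entries above (CompactionThroughputTuner every 60000 ms, CompactionChecker and MemstoreFlusherChore every 1000 ms, CompactedHFilesCleaner every 120000 ms) are periodic housekeeping tasks each region server runs through its ChoreService. The sketch below is deliberately not the HBase ChoreService/ScheduledChore API; it is a plain java.util.concurrent analogue of the same pattern, with the periods taken from the log:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreAnalogue {
      public static void main(String[] args) {
        ScheduledExecutorService chores = Executors.newSingleThreadScheduledExecutor();
        // Periods mirror the log: CompactionChecker 1000 ms, CompactionThroughputTuner 60000 ms.
        chores.scheduleAtFixedRate(() -> System.out.println("compaction check"),
            1000, 1000, TimeUnit.MILLISECONDS);
        chores.scheduleAtFixedRate(() -> System.out.println("throughput tune"),
            60000, 60000, TimeUnit.MILLISECONDS);
        // A real service runs these until shutdown; stop early so the sketch terminates.
        chores.schedule(chores::shutdown, 5, TimeUnit.SECONDS);
      }
    }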
2023-07-16 19:15:40,807 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:2;jenkins-hbase4:41397] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,807 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,807 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:40,807 DEBUG [RS:2;jenkins-hbase4:41397] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,808 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,808 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,808 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,808 DEBUG [RS:0;jenkins-hbase4:39113] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,808 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,808 DEBUG [RS:2;jenkins-hbase4:41397] zookeeper.ZKUtil(162): 
regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,808 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:40,808 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,808 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,809 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,809 DEBUG [RS:1;jenkins-hbase4:40351] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,809 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:40,809 INFO [RS:2;jenkins-hbase4:41397] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:40,810 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,811 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,811 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,818 INFO [RS:2;jenkins-hbase4:41397] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:40,825 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,825 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,825 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,826 INFO [RS:2;jenkins-hbase4:41397] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:40,827 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,830 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:40,836 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:40,837 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,837 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,837 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,837 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,838 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,838 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:40,838 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,838 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,838 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,838 DEBUG [RS:2;jenkins-hbase4:41397] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:40,839 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,839 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,839 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,841 INFO [RS:0;jenkins-hbase4:39113] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:40,842 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39113,1689534940238-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,852 INFO [RS:2;jenkins-hbase4:41397] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:40,852 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41397,1689534940338-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:40,852 INFO [RS:1;jenkins-hbase4:40351] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:40,852 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40351,1689534940288-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
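Each "Starting executor service name=RS_... corePoolSize=..., maxPoolSize=..." line above is the region server creating a small, dedicated thread pool per event type (open region, open meta, close region, log replay, and so on). Again this is not HBase's internal executor.ExecutorService class; the sketch is a plain ThreadPoolExecutor analogue using the same core/max sizes as two of the pools in the log:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class RsExecutorPoolsSketch {
      public static void main(String[] args) {
        // RS_OPEN_REGION: corePoolSize=1, maxPoolSize=1 (one task at a time per pool).
        ThreadPoolExecutor openRegionPool =
            new ThreadPoolExecutor(1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // RS_LOG_REPLAY_OPS: corePoolSize=2, maxPoolSize=2.
        ThreadPoolExecutor logReplayPool =
            new ThreadPoolExecutor(2, 2, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        openRegionPool.execute(() -> System.out.println("open region task"));
        logReplayPool.execute(() -> System.out.println("log replay task"));
        openRegionPool.shutdown();
        logReplayPool.shutdown();
      }
    }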
2023-07-16 19:15:40,858 INFO [RS:0;jenkins-hbase4:39113] regionserver.Replication(203): jenkins-hbase4.apache.org,39113,1689534940238 started 2023-07-16 19:15:40,859 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39113,1689534940238, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39113, sessionid=0x1016f8fc6fa0001 2023-07-16 19:15:40,859 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:40,859 DEBUG [RS:0;jenkins-hbase4:39113] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,859 DEBUG [RS:0;jenkins-hbase4:39113] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39113,1689534940238' 2023-07-16 19:15:40,859 DEBUG [RS:0;jenkins-hbase4:39113] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:40,859 DEBUG [RS:0;jenkins-hbase4:39113] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:40,860 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:40,860 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:40,860 DEBUG [RS:0;jenkins-hbase4:39113] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:40,860 DEBUG [RS:0;jenkins-hbase4:39113] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39113,1689534940238' 2023-07-16 19:15:40,860 DEBUG [RS:0;jenkins-hbase4:39113] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:40,860 DEBUG [RS:0;jenkins-hbase4:39113] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:40,861 DEBUG [RS:0;jenkins-hbase4:39113] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:40,861 INFO [RS:0;jenkins-hbase4:39113] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:40,861 INFO [RS:0;jenkins-hbase4:39113] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
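"Quota support disabled" means hbase.quota.enabled is left false in this test configuration, so neither the RPC quota manager nor the space quota manager is started on the region servers. A hedged sketch of the opposite setup from the client side, assuming the 2.x quota API (the user name and limit are illustrative, and the flag must also be enabled on the cluster side for it to take effect):

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
    import org.apache.hadoop.hbase.quotas.ThrottleType;

    public class QuotaSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.quota.enabled", true);   // disabled in the run logged above
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Throttle an illustrative user to 100 requests per second.
          admin.setQuota(QuotaSettingsFactory.throttleUser(
              "jenkins", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
        }
      }
    }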
2023-07-16 19:15:40,870 INFO [RS:2;jenkins-hbase4:41397] regionserver.Replication(203): jenkins-hbase4.apache.org,41397,1689534940338 started 2023-07-16 19:15:40,870 INFO [RS:1;jenkins-hbase4:40351] regionserver.Replication(203): jenkins-hbase4.apache.org,40351,1689534940288 started 2023-07-16 19:15:40,871 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41397,1689534940338, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41397, sessionid=0x1016f8fc6fa0003 2023-07-16 19:15:40,871 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40351,1689534940288, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40351, sessionid=0x1016f8fc6fa0002 2023-07-16 19:15:40,871 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:40,871 DEBUG [RS:2;jenkins-hbase4:41397] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,871 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:40,871 DEBUG [RS:1;jenkins-hbase4:40351] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,871 DEBUG [RS:1;jenkins-hbase4:40351] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40351,1689534940288' 2023-07-16 19:15:40,871 DEBUG [RS:1;jenkins-hbase4:40351] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:40,871 DEBUG [RS:2;jenkins-hbase4:41397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41397,1689534940338' 2023-07-16 19:15:40,871 DEBUG [RS:2;jenkins-hbase4:41397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:40,871 DEBUG [RS:2;jenkins-hbase4:41397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:40,871 DEBUG [RS:1;jenkins-hbase4:40351] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:40,872 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:40,872 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40351,1689534940288' 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:40,872 
DEBUG [RS:2;jenkins-hbase4:41397] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:40,872 DEBUG [RS:2;jenkins-hbase4:41397] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41397,1689534940338' 2023-07-16 19:15:40,872 DEBUG [RS:2;jenkins-hbase4:41397] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:40,872 DEBUG [RS:2;jenkins-hbase4:41397] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:40,872 DEBUG [RS:1;jenkins-hbase4:40351] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:40,873 INFO [RS:1;jenkins-hbase4:40351] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:40,873 INFO [RS:1;jenkins-hbase4:40351] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 19:15:40,873 DEBUG [RS:2;jenkins-hbase4:41397] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:40,873 INFO [RS:2;jenkins-hbase4:41397] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:40,873 INFO [RS:2;jenkins-hbase4:41397] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 19:15:40,920 DEBUG [jenkins-hbase4:45041] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 19:15:40,920 DEBUG [jenkins-hbase4:45041] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:40,921 DEBUG [jenkins-hbase4:45041] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:40,921 DEBUG [jenkins-hbase4:45041] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:40,921 DEBUG [jenkins-hbase4:45041] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:40,921 DEBUG [jenkins-hbase4:45041] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:40,922 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40351,1689534940288, state=OPENING 2023-07-16 19:15:40,924 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 19:15:40,925 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:40,925 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:40,925 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40351,1689534940288}] 2023-07-16 19:15:40,961 WARN [ReadOnlyZKClient-127.0.0.1:62260@0x3fe534ce] client.ZKConnectionRegistry(168): Meta region is in 
state OPENING 2023-07-16 19:15:40,962 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:40,963 INFO [RS:0;jenkins-hbase4:39113] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39113%2C1689534940238, suffix=, logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,39113,1689534940238, archiveDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs, maxLogs=32 2023-07-16 19:15:40,963 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:40,964 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40351] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:48324 deadline: 1689535000964, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:40,975 INFO [RS:2;jenkins-hbase4:41397] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41397%2C1689534940338, suffix=, logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,41397,1689534940338, archiveDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs, maxLogs=32 2023-07-16 19:15:40,975 INFO [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40351%2C1689534940288, suffix=, logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,40351,1689534940288, archiveDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs, maxLogs=32 2023-07-16 19:15:40,980 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK] 2023-07-16 19:15:40,980 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK] 2023-07-16 19:15:40,980 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK] 2023-07-16 19:15:40,986 INFO [RS:0;jenkins-hbase4:39113] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,39113,1689534940238/jenkins-hbase4.apache.org%2C39113%2C1689534940238.1689534940963 2023-07-16 19:15:40,986 DEBUG [RS:0;jenkins-hbase4:39113] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK], DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK], DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK]] 2023-07-16 19:15:40,996 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK] 2023-07-16 19:15:40,996 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK] 2023-07-16 19:15:40,996 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK] 2023-07-16 19:15:40,998 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK] 2023-07-16 19:15:41,002 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK] 2023-07-16 19:15:41,002 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK] 2023-07-16 19:15:41,003 INFO [RS:2;jenkins-hbase4:41397] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,41397,1689534940338/jenkins-hbase4.apache.org%2C41397%2C1689534940338.1689534940975 2023-07-16 19:15:41,004 DEBUG [RS:2;jenkins-hbase4:41397] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK], DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK], DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK]] 2023-07-16 19:15:41,006 INFO [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,40351,1689534940288/jenkins-hbase4.apache.org%2C40351%2C1689534940288.1689534940976 2023-07-16 19:15:41,006 DEBUG [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK], DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK], DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK]] 2023-07-16 19:15:41,080 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:41,082 DEBUG 
[RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:41,084 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48334, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:41,087 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 19:15:41,088 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:41,089 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40351%2C1689534940288.meta, suffix=.meta, logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,40351,1689534940288, archiveDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs, maxLogs=32 2023-07-16 19:15:41,103 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK] 2023-07-16 19:15:41,104 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK] 2023-07-16 19:15:41,103 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK] 2023-07-16 19:15:41,106 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,40351,1689534940288/jenkins-hbase4.apache.org%2C40351%2C1689534940288.meta.1689534941089.meta 2023-07-16 19:15:41,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK], DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK], DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK]] 2023-07-16 19:15:41,106 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:41,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:41,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 19:15:41,107 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 
from HTD of hbase:meta successfully. 2023-07-16 19:15:41,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 19:15:41,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:41,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 19:15:41,107 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 19:15:41,108 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 19:15:41,109 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/info 2023-07-16 19:15:41,109 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/info 2023-07-16 19:15:41,110 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 19:15:41,110 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:41,110 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 19:15:41,111 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:41,111 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/rep_barrier 2023-07-16 19:15:41,112 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 19:15:41,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:41,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 19:15:41,113 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/table 2023-07-16 19:15:41,113 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/table 2023-07-16 19:15:41,113 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 19:15:41,114 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:41,114 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740 2023-07-16 19:15:41,115 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740 2023-07-16 19:15:41,117 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 19:15:41,118 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 19:15:41,119 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11358746560, jitterRate=0.05786570906639099}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 19:15:41,119 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 19:15:41,120 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689534941080 2023-07-16 19:15:41,124 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 19:15:41,125 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 19:15:41,125 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,40351,1689534940288, state=OPEN 2023-07-16 19:15:41,127 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 19:15:41,127 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 19:15:41,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 19:15:41,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,40351,1689534940288 in 202 msec 2023-07-16 19:15:41,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 19:15:41,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 379 msec 2023-07-16 19:15:41,131 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 488 msec 2023-07-16 19:15:41,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689534941131, completionTime=-1 2023-07-16 19:15:41,132 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 19:15:41,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
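With pid=3 (OpenRegionProcedure) finished and the meta location in ZooKeeper flipped to state=OPEN, clients can resolve hbase:meta; the earlier "Meta region is in state OPENING" / NotServingRegionException retries stop at this point. A minimal sketch of resolving the meta location from a client, assuming the standard 2.x client API:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateMeta {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // reload=true bypasses the client-side cache and asks for the current location.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("hbase:meta is on " + loc.getServerName());
        }
      }
    }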
2023-07-16 19:15:41,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 19:15:41,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689535001136 2023-07-16 19:15:41,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689535061136 2023-07-16 19:15:41,136 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45041,1689534940170-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45041,1689534940170-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45041,1689534940170-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45041, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 19:15:41,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:41,142 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 19:15:41,142 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 19:15:41,143 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:41,144 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:41,145 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c empty. 2023-07-16 19:15:41,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,146 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 19:15:41,160 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:41,161 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 18d5689b5e7f387028c52f78091d478c, NAME => 'hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp 2023-07-16 19:15:41,169 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:41,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 18d5689b5e7f387028c52f78091d478c, disabling compactions & flushes 2023-07-16 19:15:41,170 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 
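The create 'hbase:namespace' entry above records the schema the master uses for the namespace table: a single 'info' family with a ROW bloom filter, in-memory caching, 10 versions, and 8 KB blocks. The master builds it internally through a CreateTableProcedure; the sketch below only shows what an equivalent schema looks like through the public 2.x client API, applied to a hypothetical user table rather than the system table:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableLikeNamespace {
      public static void main(String[] args) throws IOException {
        // Hypothetical table name; the column family mirrors the logged schema.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_namespace_like"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(true)
                .setMaxVersions(10)
                .setBlocksize(8192)
                .build())
            .build();
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(desc);
        }
      }
    }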
2023-07-16 19:15:41,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:41,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. after waiting 0 ms 2023-07-16 19:15:41,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:41,170 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:41,170 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 18d5689b5e7f387028c52f78091d478c: 2023-07-16 19:15:41,172 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:41,173 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534941173"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534941173"}]},"ts":"1689534941173"} 2023-07-16 19:15:41,175 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:41,176 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:41,176 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534941176"}]},"ts":"1689534941176"} 2023-07-16 19:15:41,177 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 19:15:41,180 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:41,180 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:41,180 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:41,180 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:41,180 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:41,181 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=18d5689b5e7f387028c52f78091d478c, ASSIGN}] 2023-07-16 19:15:41,182 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=18d5689b5e7f387028c52f78091d478c, ASSIGN 2023-07-16 19:15:41,183 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=18d5689b5e7f387028c52f78091d478c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39113,1689534940238; forceNewPlan=false, retain=false 2023-07-16 19:15:41,266 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:41,267 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 19:15:41,269 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:41,270 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:41,271 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,272 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7 empty. 
2023-07-16 19:15:41,272 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,273 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 19:15:41,283 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:41,284 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 24daec907e2a633f9c226ca8f9560ed7, NAME => 'hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp 2023-07-16 19:15:41,292 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:41,292 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 24daec907e2a633f9c226ca8f9560ed7, disabling compactions & flushes 2023-07-16 19:15:41,292 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:41,292 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:41,292 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. after waiting 0 ms 2023-07-16 19:15:41,292 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:41,292 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 
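The hbase:rsgroup descriptor logged above carries two table-level attributes: the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy. A hedged sketch of how those attributes can be declared on a table descriptor with the HBase 2.x builder API follows; the table name and class are hypothetical, and only the two attribute values are taken from the log.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeDescriptorSketch {
  static TableDescriptor build() throws IOException {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_group_meta"))  // hypothetical table name
        // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // SPLIT_POLICY => DisabledRegionSplitPolicy, as logged for hbase:rsgroup
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)  // VERSIONS => '1'
            .build())
        .build();
  }
}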
2023-07-16 19:15:41,292 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 24daec907e2a633f9c226ca8f9560ed7: 2023-07-16 19:15:41,295 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:41,295 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534941295"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534941295"}]},"ts":"1689534941295"} 2023-07-16 19:15:41,297 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 19:15:41,297 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:41,297 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534941297"}]},"ts":"1689534941297"} 2023-07-16 19:15:41,298 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 19:15:41,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:41,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:41,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:41,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:41,301 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:41,302 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=24daec907e2a633f9c226ca8f9560ed7, ASSIGN}] 2023-07-16 19:15:41,302 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=24daec907e2a633f9c226ca8f9560ed7, ASSIGN 2023-07-16 19:15:41,303 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=24daec907e2a633f9c226ca8f9560ed7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41397,1689534940338; forceNewPlan=false, retain=false 2023-07-16 19:15:41,303 INFO [jenkins-hbase4:45041] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 19:15:41,305 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=18d5689b5e7f387028c52f78091d478c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,305 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534941305"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534941305"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534941305"}]},"ts":"1689534941305"} 2023-07-16 19:15:41,306 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=24daec907e2a633f9c226ca8f9560ed7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,306 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534941306"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534941306"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534941306"}]},"ts":"1689534941306"} 2023-07-16 19:15:41,306 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 18d5689b5e7f387028c52f78091d478c, server=jenkins-hbase4.apache.org,39113,1689534940238}] 2023-07-16 19:15:41,308 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 24daec907e2a633f9c226ca8f9560ed7, server=jenkins-hbase4.apache.org,41397,1689534940338}] 2023-07-16 19:15:41,460 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,460 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,460 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:41,461 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:41,462 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:41,462 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51316, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:41,466 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:41,466 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 
2023-07-16 19:15:41,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 24daec907e2a633f9c226ca8f9560ed7, NAME => 'hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:41,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 18d5689b5e7f387028c52f78091d478c, NAME => 'hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:41,466 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. service=MultiRowMutationService 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:41,467 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,467 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,468 INFO [StoreOpener-18d5689b5e7f387028c52f78091d478c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,469 INFO [StoreOpener-24daec907e2a633f9c226ca8f9560ed7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,470 DEBUG [StoreOpener-18d5689b5e7f387028c52f78091d478c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/info 2023-07-16 19:15:41,470 DEBUG [StoreOpener-18d5689b5e7f387028c52f78091d478c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/info 2023-07-16 19:15:41,470 DEBUG [StoreOpener-24daec907e2a633f9c226ca8f9560ed7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/m 2023-07-16 19:15:41,471 DEBUG [StoreOpener-24daec907e2a633f9c226ca8f9560ed7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/m 2023-07-16 19:15:41,471 INFO [StoreOpener-24daec907e2a633f9c226ca8f9560ed7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 24daec907e2a633f9c226ca8f9560ed7 columnFamilyName m 2023-07-16 19:15:41,471 INFO [StoreOpener-18d5689b5e7f387028c52f78091d478c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 18d5689b5e7f387028c52f78091d478c columnFamilyName info 2023-07-16 19:15:41,471 INFO [StoreOpener-24daec907e2a633f9c226ca8f9560ed7-1] regionserver.HStore(310): Store=24daec907e2a633f9c226ca8f9560ed7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:41,471 INFO [StoreOpener-18d5689b5e7f387028c52f78091d478c-1] regionserver.HStore(310): Store=18d5689b5e7f387028c52f78091d478c/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:41,472 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,472 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,473 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,475 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:41,476 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:41,479 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:41,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:41,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 18d5689b5e7f387028c52f78091d478c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11812755840, jitterRate=0.10014861822128296}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:41,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 18d5689b5e7f387028c52f78091d478c: 2023-07-16 19:15:41,480 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 24daec907e2a633f9c226ca8f9560ed7; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3fe085c7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:41,480 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 24daec907e2a633f9c226ca8f9560ed7: 2023-07-16 19:15:41,481 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c., pid=8, masterSystemTime=1689534941460 2023-07-16 19:15:41,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7., pid=9, masterSystemTime=1689534941460 2023-07-16 19:15:41,486 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:41,487 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:41,487 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=18d5689b5e7f387028c52f78091d478c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 
2023-07-16 19:15:41,487 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689534941487"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534941487"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534941487"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534941487"}]},"ts":"1689534941487"} 2023-07-16 19:15:41,489 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=24daec907e2a633f9c226ca8f9560ed7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,489 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689534941488"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534941488"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534941488"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534941488"}]},"ts":"1689534941488"} 2023-07-16 19:15:41,488 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:41,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-16 19:15:41,491 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 18d5689b5e7f387028c52f78091d478c, server=jenkins-hbase4.apache.org,39113,1689534940238 in 184 msec 2023-07-16 19:15:41,492 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 19:15:41,492 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 24daec907e2a633f9c226ca8f9560ed7, server=jenkins-hbase4.apache.org,41397,1689534940338 in 182 msec 2023-07-16 19:15:41,493 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-16 19:15:41,493 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=18d5689b5e7f387028c52f78091d478c, ASSIGN in 310 msec 2023-07-16 19:15:41,494 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:41,494 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534941494"}]},"ts":"1689534941494"} 2023-07-16 19:15:41,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-16 19:15:41,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=24daec907e2a633f9c226ca8f9560ed7, ASSIGN in 190 msec 2023-07-16 19:15:41,495 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure 
table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:41,495 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534941495"}]},"ts":"1689534941495"} 2023-07-16 19:15:41,495 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 19:15:41,496 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 19:15:41,498 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:41,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 357 msec 2023-07-16 19:15:41,499 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:41,501 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 234 msec 2023-07-16 19:15:41,543 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 19:15:41,544 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:41,545 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:41,547 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:41,549 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:41,551 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 19:15:41,559 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:41,561 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 10 msec 2023-07-16 19:15:41,570 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:41,571 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51326, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), 
service=ClientService 2023-07-16 19:15:41,572 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 19:15:41,572 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 19:15:41,574 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 19:15:41,580 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:41,580 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:41,581 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:41,582 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 19:15:41,583 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:41,587 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-07-16 19:15:41,599 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 19:15:41,601 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 19:15:41,601 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.178sec 2023-07-16 19:15:41,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 19:15:41,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 19:15:41,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 19:15:41,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45041,1689534940170-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 
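The two CreateNamespaceProcedure entries above (namespace=default, namespace=hbase) are driven by the master itself at startup; a client triggers the same procedure through Admin.createNamespace. A small sketch under that assumption, using a hypothetical namespace name rather than the system ones:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Runs a CreateNamespaceProcedure on the master, like pid=10/pid=11 in the log above.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());  // hypothetical name
      // Lists all namespaces, which should include 'default' and 'hbase' once they exist.
      for (NamespaceDescriptor nd : admin.listNamespaceDescriptors()) {
        System.out.println(nd.getName());
      }
    }
  }
}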
2023-07-16 19:15:41,602 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45041,1689534940170-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 19:15:41,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 19:15:41,609 DEBUG [Listener at localhost/42605] zookeeper.ReadOnlyZKClient(139): Connect 0x3720103e to 127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:41,615 DEBUG [Listener at localhost/42605] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10a31e76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:41,616 DEBUG [hconnection-0x7d87758a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:41,618 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48336, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:41,620 INFO [Listener at localhost/42605] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:41,620 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:41,622 DEBUG [Listener at localhost/42605] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 19:15:41,624 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 19:15:41,627 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 19:15:41,627 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:41,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 19:15:41,628 DEBUG [Listener at localhost/42605] zookeeper.ReadOnlyZKClient(139): Connect 0x46a2470b to 127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:41,633 DEBUG [Listener at localhost/42605] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cf5170, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:41,634 INFO [Listener at localhost/42605] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62260 2023-07-16 19:15:41,637 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62260, baseZNode=/hbase 
Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:41,639 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016f8fc6fa000a connected 2023-07-16 19:15:41,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:41,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:41,644 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 19:15:41,663 INFO [Listener at localhost/42605] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 19:15:41,663 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:41,663 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:41,663 INFO [Listener at localhost/42605] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 19:15:41,663 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 19:15:41,663 INFO [Listener at localhost/42605] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 19:15:41,664 INFO [Listener at localhost/42605] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 19:15:41,664 INFO [Listener at localhost/42605] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44385 2023-07-16 19:15:41,665 INFO [Listener at localhost/42605] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 19:15:41,666 DEBUG [Listener at localhost/42605] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 19:15:41,667 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:41,668 INFO [Listener at localhost/42605] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 19:15:41,670 INFO [Listener at localhost/42605] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44385 connecting to ZooKeeper ensemble=127.0.0.1:62260 2023-07-16 19:15:41,674 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:443850x0, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 19:15:41,676 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44385-0x1016f8fc6fa000b connected 2023-07-16 19:15:41,676 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 19:15:41,677 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 19:15:41,677 DEBUG [Listener at localhost/42605] zookeeper.ZKUtil(164): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 19:15:41,678 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44385 2023-07-16 19:15:41,678 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44385 2023-07-16 19:15:41,679 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44385 2023-07-16 19:15:41,679 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44385 2023-07-16 19:15:41,679 DEBUG [Listener at localhost/42605] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44385 2023-07-16 19:15:41,681 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 19:15:41,681 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 19:15:41,681 INFO [Listener at localhost/42605] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 19:15:41,681 INFO [Listener at localhost/42605] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 19:15:41,681 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 19:15:41,682 INFO [Listener at localhost/42605] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 19:15:41,682 INFO [Listener at localhost/42605] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 19:15:41,682 INFO [Listener at localhost/42605] http.HttpServer(1146): Jetty bound to port 41463 2023-07-16 19:15:41,682 INFO [Listener at localhost/42605] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 19:15:41,685 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:41,685 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@478f422{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,AVAILABLE} 2023-07-16 19:15:41,686 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:41,686 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5da1de58{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 19:15:41,691 INFO [Listener at localhost/42605] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 19:15:41,691 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 19:15:41,692 INFO [Listener at localhost/42605] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 19:15:41,692 INFO [Listener at localhost/42605] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 19:15:41,693 INFO [Listener at localhost/42605] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 19:15:41,694 INFO [Listener at localhost/42605] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@a9db2d1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:41,696 INFO [Listener at localhost/42605] server.AbstractConnector(333): Started ServerConnector@24e74273{HTTP/1.1, (http/1.1)}{0.0.0.0:41463} 2023-07-16 19:15:41,696 INFO [Listener at localhost/42605] server.Server(415): Started @43411ms 2023-07-16 19:15:41,700 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(951): ClusterId : a52829f4-01d8-4e67-bf87-4b7c0e4f9e30 2023-07-16 19:15:41,700 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 19:15:41,702 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 19:15:41,702 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 19:15:41,704 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 19:15:41,706 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ReadOnlyZKClient(139): Connect 0x28d57238 to 127.0.0.1:62260 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 19:15:41,711 DEBUG [RS:3;jenkins-hbase4:44385] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26129952, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 19:15:41,711 DEBUG [RS:3;jenkins-hbase4:44385] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@222d1948, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 19:15:41,719 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44385 2023-07-16 19:15:41,719 INFO [RS:3;jenkins-hbase4:44385] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 19:15:41,719 INFO [RS:3;jenkins-hbase4:44385] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 19:15:41,719 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 19:15:41,720 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45041,1689534940170 with isa=jenkins-hbase4.apache.org/172.31.14.131:44385, startcode=1689534941662 2023-07-16 19:15:41,720 DEBUG [RS:3;jenkins-hbase4:44385] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 19:15:41,722 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50035, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 19:15:41,723 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45041] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,723 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
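The RS:3 startup and registration above is the fourth region server being added after the "Restoring servers: 1" line from TestRSGroupsBase. A hedged sketch of how a test can start such an extra region server on the mini cluster, assuming the HBaseTestingUtility/MiniHBaseCluster test APIs; the helper class and method names are illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class ExtraRegionServerSketch {
  // Starts one additional HRegionServer thread on the running mini cluster; the new server
  // then reports for duty to the active master, as RS:3 does in the log above.
  public static void addOneRegionServer(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline();  // block until the new region server is up and registered
  }
}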
2023-07-16 19:15:41,723 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991 2023-07-16 19:15:41,723 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36299 2023-07-16 19:15:41,723 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43263 2023-07-16 19:15:41,728 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:41,728 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:41,728 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:41,728 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:41,728 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 19:15:41,731 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,731 WARN [RS:3;jenkins-hbase4:44385] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 19:15:41,731 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44385,1689534941662] 2023-07-16 19:15:41,731 INFO [RS:3;jenkins-hbase4:44385] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 19:15:41,731 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,731 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 19:15:41,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,731 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,735 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 19:15:41,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,735 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,736 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:41,736 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:41,736 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:41,736 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,737 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,737 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,737 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:41,738 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,738 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:41,738 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ZKUtil(162): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:41,739 DEBUG [RS:3;jenkins-hbase4:44385] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 19:15:41,739 INFO [RS:3;jenkins-hbase4:44385] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 19:15:41,740 INFO [RS:3;jenkins-hbase4:44385] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 19:15:41,740 INFO [RS:3;jenkins-hbase4:44385] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 19:15:41,741 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,741 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 19:15:41,742 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,742 DEBUG [RS:3;jenkins-hbase4:44385] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 19:15:41,746 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,746 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,747 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 19:15:41,758 INFO [RS:3;jenkins-hbase4:44385] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 19:15:41,758 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44385,1689534941662-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
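Note: the executor-service and chore entries above show RS:3 wiring up its background pools through the HBase ChoreService. Below is a minimal, hedged sketch of that API (constructor and method signatures are assumed from hbase-common; the "demo" prefix and "demo-chore" name are invented for illustration), showing how a periodic task is scheduled the way the CompactionChecker and MemstoreFlusherChore entries above are.

```java
// Hedged sketch, not taken from the RegionServer source. ChoreService and
// ScheduledChore live in hbase-common; "demo" and "demo-chore" are invented
// names. The 1000 ms period mirrors the CompactionChecker chore in this run.
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreServiceSketch {
  public static void main(String[] args) throws Exception {
    // Minimal Stoppable so the chore can be cancelled cooperatively.
    final boolean[] stopped = { false };
    Stoppable stopper = new Stoppable() {
      @Override public void stop(String why) { stopped[0] = true; }
      @Override public boolean isStopped() { return stopped[0]; }
    };

    ChoreService choreService = new ChoreService("demo");
    ScheduledChore chore =
        new ScheduledChore("demo-chore", stopper, 1000, 0, TimeUnit.MILLISECONDS) {
          @Override
          protected void chore() {
            // Periodic work goes here; the "Chore ScheduledChore name=... is
            // enabled." lines above are logged when such chores are scheduled.
            System.out.println("chore tick");
          }
        };
    choreService.scheduleChore(chore);

    Thread.sleep(3_000);
    stopper.stop("done");
    choreService.shutdown();
  }
}
```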
2023-07-16 19:15:41,769 INFO [RS:3;jenkins-hbase4:44385] regionserver.Replication(203): jenkins-hbase4.apache.org,44385,1689534941662 started 2023-07-16 19:15:41,769 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44385,1689534941662, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44385, sessionid=0x1016f8fc6fa000b 2023-07-16 19:15:41,769 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 19:15:41,769 DEBUG [RS:3;jenkins-hbase4:44385] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,769 DEBUG [RS:3;jenkins-hbase4:44385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44385,1689534941662' 2023-07-16 19:15:41,769 DEBUG [RS:3;jenkins-hbase4:44385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 19:15:41,769 DEBUG [RS:3;jenkins-hbase4:44385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44385,1689534941662' 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 19:15:41,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 19:15:41,770 DEBUG [RS:3;jenkins-hbase4:44385] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 19:15:41,770 INFO [RS:3;jenkins-hbase4:44385] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 19:15:41,770 INFO [RS:3;jenkins-hbase4:44385] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
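Note: the entries just above and below record the rsgroup admin traffic from the test's cleanup path (AddRSGroup for a group named "master", ListRSGroupInfos, then an attempt to move the master's address jenkins-hbase4.apache.org:45041 into that group, which the master rejects with a ConstraintException because that address is not an online region server). The sketch below approximates those calls against the branch-2 RSGroupAdminClient; the constructor and method signatures are assumptions, and the host/port are taken from this log purely for illustration.

```java
// Hedged sketch (assumed signatures from the branch-2 hbase-rsgroup module, not
// copied from the test source): reproduces the shape of the admin calls logged
// around this point.
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // "add rsgroup master": create an empty group named "master".
      rsGroupAdmin.addRSGroup("master");

      // "move servers [jenkins-hbase4.apache.org:45041] to rsgroup master":
      // 45041 is the active master's RPC port in this run, not a region server,
      // so the move is refused.
      Address masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 45041);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
      } catch (ConstraintException e) {
        // Matches the "Server ... is either offline or it does not exist." entry.
        System.out.println("rejected: " + e.getMessage());
      }
    }
  }
}
```

In this run the test treats that rejection as expected, logging it at WARN ("Got this on setup, FYI") and then waiting for the rsgroup cleanup to finish, as the entries that follow show.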
2023-07-16 19:15:41,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:41,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:41,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:41,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:41,777 DEBUG [hconnection-0x1ecadc90-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:41,779 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48340, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:41,783 DEBUG [hconnection-0x1ecadc90-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 19:15:41,786 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51334, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 19:15:41,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:41,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:41,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:41,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:41,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39610 deadline: 1689536141790, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:41,791 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:41,792 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:41,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:41,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:41,793 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:41,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:41,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:41,850 INFO [Listener at localhost/42605] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=564 (was 514) Potentially hanging thread: Listener at localhost/42605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:39113Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33491 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/42605 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1196734272@qtp-644307922-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: qtp722503952-2259 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40351 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:43643 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_780457800_17 at /127.0.0.1:60776 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: jenkins-hbase4:45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1119635145_17 at /127.0.0.1:60808 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp193422121-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 1 on default port 46619 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@36c8a553 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-49346ed9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2231 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1689234585@qtp-644307922-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40263 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1035693180_17 at /127.0.0.1:46354 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@4cb6e0e8 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:36299 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data2/current/BP-620611439-172.31.14.131-1689534939350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44385-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1119635145_17 at /127.0.0.1:46272 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@132c392 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1035693180_17 at /127.0.0.1:46346 [Receiving block 
BP-620611439-172.31.14.131-1689534939350:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1ecadc90-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2203 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x28d57238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at 
localhost/42605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:36299 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@74ebeda4 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:36299 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/42605-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 3 on default port 36299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 36299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1825508135-2202 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:40351Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_780457800_17 at /127.0.0.1:58012 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1119635145_17 at /127.0.0.1:57980 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2201 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:43643 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x28d57238-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x3fe534ce sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42605.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp2087659022-2574 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x46a2470b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46619 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42605-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp193422121-2301 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2227 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2233 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2198 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2197 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42605-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 33491 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: M:0;jenkins-hbase4:45041 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:36299 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
qtp722503952-2262 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@357e2feb java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp211177675-2290 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534940667 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: qtp193422121-2298 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:36299 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp193422121-2300 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x391b8bc6-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2568-acceptor-0@4f828ec3-ServerConnector@24e74273{HTTP/1.1, (http/1.1)}{0.0.0.0:41463} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2199 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991-prefix:jenkins-hbase4.apache.org,39113,1689534940238 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp193422121-2303 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1825508135-2200 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x3fe534ce-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 36299 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:44385Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@2c17c851[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42605.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: 1816872216@qtp-98773099-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42035 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56571@0x5ea8809c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:62260 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x0b8740e4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data1/current/BP-620611439-172.31.14.131-1689534939350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33491 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/42605.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 46619 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp211177675-2291 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2087659022-2567 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42605-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@7194d9f2[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x25c91e9a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x7d87758a-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2572 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:41397 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp193422121-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36007-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991-prefix:jenkins-hbase4.apache.org,40351,1689534940288.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1724392316@qtp-636949372-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: 39044068@qtp-1469916078-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46431 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:36299 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7f5ae173 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x3fe534ce-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp211177675-2288-acceptor-0@27a65c76-ServerConnector@b2fe258{HTTP/1.1, (http/1.1)}{0.0.0.0:44061} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x28d57238-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp2090151062-2229 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-52b39fb9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43643 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2263 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_780457800_17 at /127.0.0.1:46332 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2258-acceptor-0@3aef6ed3-ServerConnector@66049cc4{HTTP/1.1, (http/1.1)}{0.0.0.0:46695} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2257 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x34d303dd-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 46619 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 1 on default port 36299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 42605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp211177675-2287 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x391b8bc6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x391b8bc6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data3/current/BP-620611439-172.31.14.131-1689534939350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43643 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server 
handler 4 on default port 33491 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:44385 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991-prefix:jenkins-hbase4.apache.org,40351,1689534940288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1035693180_17 at /127.0.0.1:60866 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-69eb0e1f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@321cd692 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1035693180_17 at /127.0.0.1:58040 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:36299 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43643 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(465763837) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp211177675-2289 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp211177675-2294 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4db7da14[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@606d5d1f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5daf6e09-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp211177675-2293 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp722503952-2260 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x25c91e9a-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56571@0x5ea8809c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 541915685@qtp-98773099-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData-prefix:jenkins-hbase4.apache.org,45041,1689534940170 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp193422121-2302-acceptor-0@74e88af1-ServerConnector@5409c29{HTTP/1.1, (http/1.1)}{0.0.0.0:46757} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43643 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2087659022-2569 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41265,1689534935134 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Listener at localhost/42605.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x3720103e-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-787946618_17 at /127.0.0.1:58028 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2232 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@49107a66 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534940667 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RS:0;jenkins-hbase4:39113-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2264 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1035693180_17 at /127.0.0.1:60852 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: 
BP-620611439-172.31.14.131-1689534939350:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp2090151062-2230 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1035693180_17 at /127.0.0.1:58044 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42605-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-787946618_17 at /127.0.0.1:46336 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x0b8740e4-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server idle connection scanner for port 42605 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x3720103e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-787946618_17 at /127.0.0.1:60850 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 722558439@qtp-1469916078-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data4/current/BP-620611439-172.31.14.131-1689534939350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 
on default port 46619 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2087659022-2573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x46a2470b-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:2;jenkins-hbase4:41397-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:36299 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp722503952-2261 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 42605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x46a2470b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp211177675-2292 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33491 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Session-HouseKeeper-443e44f8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1a24290f sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp193422121-2299 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1060732671.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40351-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991-prefix:jenkins-hbase4.apache.org,41397,1689534940338 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:56571@0x5ea8809c-SendThread(127.0.0.1:56571) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x3720103e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1425772335.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x0b8740e4-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1825508135-2196-acceptor-0@d9efa18-ServerConnector@22366487{HTTP/1.1, (http/1.1)}{0.0.0.0:43263} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1119635145_17 at /127.0.0.1:46304 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:62260): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36007-SendThread(127.0.0.1:56571) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43643 from 
jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@52b8cdb0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 46619 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:43643 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:39113 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33491 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/42605-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:41397Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@30ad32c0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1ecadc90-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@575948b0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:36299 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 711757251@qtp-636949372-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46417 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp2090151062-2234 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x34d303dd-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data6/current/BP-620611439-172.31.14.131-1689534939350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_780457800_17 at /127.0.0.1:60842 [Receiving block BP-620611439-172.31.14.131-1689534939350:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40351 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41397 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62260@0x25c91e9a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/42605-SendThread(127.0.0.1:62260) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45041 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2090151062-2228-acceptor-0@7ba085ac-ServerConnector@30ce9388{HTTP/1.1, (http/1.1)}{0.0.0.0:38123} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39113 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data5/current/BP-620611439-172.31.14.131-1689534939350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:36299 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43643 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-787946618_17 at /127.0.0.1:58042 [Waiting for operation 
#3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2570 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45041,1689534940170 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-620611439-172.31.14.131-1689534939350:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42605 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) - Thread LEAK? -, OpenFileDescriptor=835 (was 791) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=432 (was 482), ProcessCount=172 (was 172), AvailableMemoryMB=2569 (was 2729) 2023-07-16 19:15:41,853 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=564 is superior to 500 2023-07-16 19:15:41,871 INFO [Listener at localhost/42605] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=563, OpenFileDescriptor=835, MaxFileDescriptor=60000, SystemLoadAverage=432, ProcessCount=172, AvailableMemoryMB=2567 2023-07-16 19:15:41,871 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=563 is superior to 500 2023-07-16 19:15:41,871 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-16 19:15:41,872 INFO [RS:3;jenkins-hbase4:44385] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44385%2C1689534941662, suffix=, logDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,44385,1689534941662, archiveDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs, maxLogs=32 2023-07-16 19:15:41,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:41,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:41,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:41,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
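The entries just above and just below record the test's rsgroup reset sequence: ListRSGroupInfos, MoveTables with an empty set (which the server ignores, as logged), MoveServers, RemoveRSGroup and then AddRSGroup for the 'master' group. As a rough illustration only, the client side of that sequence can be sketched as below; RSGroupAdminClient and its moveServers method appear in stack traces later in this log, but the constructor and the remaining method signatures here are assumptions about the branch-2 hbase-rsgroup module, not a verified API reference.

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hedged sketch of the group-reset calls logged around this point.
public class RSGroupResetSketch {
  static void resetGroups(Connection conn) throws Exception {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);          // assumed constructor
    groups.listRSGroups();                                             // -> ListRSGroupInfos
    groups.moveTables(Collections.<TableName>emptySet(), "default");   // -> MoveTables; empty set is ignored, as logged
    groups.moveServers(Collections.<Address>emptySet(), "default");    // -> MoveServers
    groups.removeRSGroup("master");                                    // -> RemoveRSGroup
    groups.addRSGroup("master");                                       // -> AddRSGroup (re-added just below)
  }
}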
2023-07-16 19:15:41,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:41,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:41,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:41,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:41,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:41,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:41,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:41,890 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:41,892 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK] 2023-07-16 19:15:41,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:41,892 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK] 2023-07-16 19:15:41,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:41,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:41,896 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK] 2023-07-16 19:15:41,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:41,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:41,900 INFO [RS:3;jenkins-hbase4:44385] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/WALs/jenkins-hbase4.apache.org,44385,1689534941662/jenkins-hbase4.apache.org%2C44385%2C1689534941662.1689534941873 2023-07-16 19:15:41,900 DEBUG [RS:3;jenkins-hbase4:44385] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46173,DS-26cdaadd-d4c9-4440-9c42-8c17efeae96a,DISK], DatanodeInfoWithStorage[127.0.0.1:39707,DS-660a3e9a-7b82-4408-a9f8-afc6895684e3,DISK], DatanodeInfoWithStorage[127.0.0.1:35497,DS-503ebb63-77f1-4b62-82ab-43e16f8d85a0,DISK]] 2023-07-16 19:15:41,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:41,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:41,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:41,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:41,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39610 deadline: 1689536141904, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:41,905 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:41,906 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:41,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:41,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:41,907 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:41,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:41,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:41,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:41,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 19:15:41,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:41,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-16 19:15:41,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 19:15:41,912 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:41,912 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:41,913 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:41,915 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 19:15:41,916 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 
19:15:41,916 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f empty. 2023-07-16 19:15:41,917 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:41,917 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 19:15:41,928 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-16 19:15:41,929 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 36b67c51dfb3ec8e0ea0c712516fce7f, NAME => 't1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp 2023-07-16 19:15:41,937 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:41,938 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 36b67c51dfb3ec8e0ea0c712516fce7f, disabling compactions & flushes 2023-07-16 19:15:41,938 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:41,938 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:41,938 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. after waiting 0 ms 2023-07-16 19:15:41,938 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:41,938 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:41,938 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 36b67c51dfb3ec8e0ea0c712516fce7f: 2023-07-16 19:15:41,940 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 19:15:41,940 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534941940"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534941940"}]},"ts":"1689534941940"} 2023-07-16 19:15:41,942 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
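The CreateTableProcedure being stepped through here (pid=12: PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS, ...) is the master-side handling of the "create 't1'" request whose descriptor is echoed above: a single column family 'cf1' with every attribute at its default. A minimal client-side equivalent, sketched with the standard HBase 2.x Admin API; the connection setup shown is an illustrative assumption, not how the test itself obtains its connection.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Hedged sketch: the kind of client call the master turns into the
// CreateTableProcedure logged here.
public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          // 'cf1' with defaults: VERSIONS => 1, COMPRESSION => NONE, BLOCKSIZE => 65536, ...
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build();
      admin.createTable(t1);
    }
  }
}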
2023-07-16 19:15:41,942 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 19:15:41,942 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534941942"}]},"ts":"1689534941942"} 2023-07-16 19:15:41,943 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-16 19:15:41,947 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 19:15:41,947 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 19:15:41,947 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 19:15:41,947 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 19:15:41,947 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 19:15:41,947 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 19:15:41,947 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, ASSIGN}] 2023-07-16 19:15:41,948 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, ASSIGN 2023-07-16 19:15:41,949 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44385,1689534941662; forceNewPlan=false, retain=false 2023-07-16 19:15:42,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 19:15:42,099 INFO [jenkins-hbase4:45041] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 19:15:42,100 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=36b67c51dfb3ec8e0ea0c712516fce7f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:42,101 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534942100"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534942100"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534942100"}]},"ts":"1689534942100"} 2023-07-16 19:15:42,102 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 36b67c51dfb3ec8e0ea0c712516fce7f, server=jenkins-hbase4.apache.org,44385,1689534941662}] 2023-07-16 19:15:42,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 19:15:42,255 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:42,255 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 19:15:42,256 INFO [RS-EventLoopGroup-16-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49722, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 19:15:42,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:42,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 36b67c51dfb3ec8e0ea0c712516fce7f, NAME => 't1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.', STARTKEY => '', ENDKEY => ''} 2023-07-16 19:15:42,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 19:15:42,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,262 INFO [StoreOpener-36b67c51dfb3ec8e0ea0c712516fce7f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,263 DEBUG [StoreOpener-36b67c51dfb3ec8e0ea0c712516fce7f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/cf1 2023-07-16 19:15:42,263 DEBUG [StoreOpener-36b67c51dfb3ec8e0ea0c712516fce7f-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/cf1 2023-07-16 19:15:42,264 INFO [StoreOpener-36b67c51dfb3ec8e0ea0c712516fce7f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 36b67c51dfb3ec8e0ea0c712516fce7f columnFamilyName cf1 2023-07-16 19:15:42,264 INFO [StoreOpener-36b67c51dfb3ec8e0ea0c712516fce7f-1] regionserver.HStore(310): Store=36b67c51dfb3ec8e0ea0c712516fce7f/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 19:15:42,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,271 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 19:15:42,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 36b67c51dfb3ec8e0ea0c712516fce7f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9587070400, jitterRate=-0.10713449120521545}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 19:15:42,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 36b67c51dfb3ec8e0ea0c712516fce7f: 2023-07-16 19:15:42,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f., pid=14, masterSystemTime=1689534942255 2023-07-16 19:15:42,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:42,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 
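Once the region is open, the entries below show the test utility blocking until every region of t1 is assigned ("Waiting until all regions of table t1 get assigned", via HBaseTestingUtility and Waiter). Outside the mini-cluster a plain client can wait for the same condition; the polling loop below is a hedged, illustrative stand-in for that utility, using only Admin#isTableAvailable from the standard 2.x API.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Hedged sketch: poll until the table is fully available, loosely mirroring the
// wait the test utility performs in the entries that follow.
public class WaitForT1Sketch {
  static void waitUntilAvailable(Admin admin, long timeoutMs) throws Exception {
    TableName t1 = TableName.valueOf("t1");
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!admin.isTableAvailable(t1)) {       // true once every region of t1 is open
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("t1 not available within " + timeoutMs + " ms");
      }
      Thread.sleep(100);                        // simple fixed-interval poll (illustrative)
    }
  }
}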
2023-07-16 19:15:42,277 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=36b67c51dfb3ec8e0ea0c712516fce7f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:42,278 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534942277"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689534942277"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689534942277"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689534942277"}]},"ts":"1689534942277"} 2023-07-16 19:15:42,281 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-16 19:15:42,281 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 36b67c51dfb3ec8e0ea0c712516fce7f, server=jenkins-hbase4.apache.org,44385,1689534941662 in 177 msec 2023-07-16 19:15:42,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 19:15:42,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, ASSIGN in 334 msec 2023-07-16 19:15:42,283 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 19:15:42,283 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534942283"}]},"ts":"1689534942283"} 2023-07-16 19:15:42,285 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-16 19:15:42,287 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 19:15:42,288 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 378 msec 2023-07-16 19:15:42,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 19:15:42,515 INFO [Listener at localhost/42605] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-16 19:15:42,515 DEBUG [Listener at localhost/42605] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-16 19:15:42,515 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:42,517 INFO [Listener at localhost/42605] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-16 19:15:42,518 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:42,518 INFO [Listener at localhost/42605] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-16 19:15:42,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 19:15:42,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 19:15:42,525 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 19:15:42,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-16 19:15:42,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:39610 deadline: 1689535002519, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-16 19:15:42,528 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:42,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=9 msec 2023-07-16 19:15:42,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:42,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:42,630 INFO [Listener at localhost/42605] client.HBaseAdmin$15(890): Started disable of t1 2023-07-16 19:15:42,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-16 19:15:42,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-16 19:15:42,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 19:15:42,635 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534942635"}]},"ts":"1689534942635"} 2023-07-16 19:15:42,637 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-16 19:15:42,638 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-16 19:15:42,639 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, UNASSIGN}] 2023-07-16 19:15:42,640 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, UNASSIGN 2023-07-16 19:15:42,641 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=36b67c51dfb3ec8e0ea0c712516fce7f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:42,641 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534942641"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689534942641"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689534942641"}]},"ts":"1689534942641"} 2023-07-16 19:15:42,642 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 36b67c51dfb3ec8e0ea0c712516fce7f, server=jenkins-hbase4.apache.org,44385,1689534941662}] 2023-07-16 19:15:42,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 19:15:42,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 36b67c51dfb3ec8e0ea0c712516fce7f, disabling compactions & flushes 2023-07-16 19:15:42,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:42,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:42,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. after waiting 0 ms 2023-07-16 19:15:42,795 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 
2023-07-16 19:15:42,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 19:15:42,802 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f. 2023-07-16 19:15:42,802 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 36b67c51dfb3ec8e0ea0c712516fce7f: 2023-07-16 19:15:42,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,806 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=36b67c51dfb3ec8e0ea0c712516fce7f, regionState=CLOSED 2023-07-16 19:15:42,806 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689534942806"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689534942806"}]},"ts":"1689534942806"} 2023-07-16 19:15:42,809 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 19:15:42,809 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 36b67c51dfb3ec8e0ea0c712516fce7f, server=jenkins-hbase4.apache.org,44385,1689534941662 in 165 msec 2023-07-16 19:15:42,811 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-16 19:15:42,811 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=36b67c51dfb3ec8e0ea0c712516fce7f, UNASSIGN in 170 msec 2023-07-16 19:15:42,812 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689534942812"}]},"ts":"1689534942812"} 2023-07-16 19:15:42,813 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-16 19:15:42,815 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-16 19:15:42,819 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 186 msec 2023-07-16 19:15:42,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 19:15:42,937 INFO [Listener at localhost/42605] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-16 19:15:42,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-16 19:15:42,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-16 19:15:42,952 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 19:15:42,953 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-16 19:15:42,954 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-16 19:15:42,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:42,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:42,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:42,961 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,963 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/cf1, FileablePath, hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/recovered.edits] 2023-07-16 19:15:42,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 19:15:42,971 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/recovered.edits/4.seqid to hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/archive/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f/recovered.edits/4.seqid 2023-07-16 19:15:42,973 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/.tmp/data/default/t1/36b67c51dfb3ec8e0ea0c712516fce7f 2023-07-16 19:15:42,973 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 19:15:42,976 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-16 19:15:42,978 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-16 19:15:42,979 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-16 19:15:42,981 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-16 19:15:42,981 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-16 19:15:42,981 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689534942981"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:42,982 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 19:15:42,982 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 36b67c51dfb3ec8e0ea0c712516fce7f, NAME => 't1,,1689534941908.36b67c51dfb3ec8e0ea0c712516fce7f.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 19:15:42,982 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-16 19:15:42,982 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689534942982"}]},"ts":"9223372036854775807"} 2023-07-16 19:15:42,983 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-16 19:15:42,985 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 19:15:42,995 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 46 msec 2023-07-16 19:15:43,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 19:15:43,072 INFO [Listener at localhost/42605] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-16 19:15:43,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:43,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,090 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39610 deadline: 1689536143103, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,104 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:43,108 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,109 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,131 INFO [Listener at localhost/42605] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=571 (was 563) - Thread LEAK? -, OpenFileDescriptor=839 (was 835) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=432 (was 432), ProcessCount=172 (was 172), AvailableMemoryMB=2403 (was 2567) 2023-07-16 19:15:43,131 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-16 19:15:43,152 INFO [Listener at localhost/42605] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=432, ProcessCount=172, AvailableMemoryMB=2403 2023-07-16 19:15:43,152 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-16 19:15:43,153 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-16 19:15:43,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 19:15:43,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,168 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,170 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,172 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,180 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39610 deadline: 1689536143180, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,180 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:43,182 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,183 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 19:15:43,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:43,185 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-16 19:15:43,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 19:15:43,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 19:15:43,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:43,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,204 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,209 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39610 deadline: 1689536143228, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,230 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:43,232 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,233 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,268 INFO [Listener at localhost/42605] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=573 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=432 (was 432), ProcessCount=172 (was 172), AvailableMemoryMB=2354 (was 2403) 2023-07-16 19:15:43,268 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-16 19:15:43,306 INFO [Listener at localhost/42605] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=432, ProcessCount=172, AvailableMemoryMB=2317 2023-07-16 19:15:43,306 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-16 19:15:43,306 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-16 19:15:43,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 19:15:43,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,323 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,326 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39610 deadline: 1689536143334, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,335 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:43,337 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,338 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,339 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:43,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,368 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39610 deadline: 1689536143379, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,380 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:43,382 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,383 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,448 INFO [Listener at localhost/42605] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574 (was 573) - Thread LEAK? 
-, OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=432 (was 432), ProcessCount=172 (was 172), AvailableMemoryMB=2295 (was 2317) 2023-07-16 19:15:43,448 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 19:15:43,469 INFO [Listener at localhost/42605] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574, OpenFileDescriptor=839, MaxFileDescriptor=60000, SystemLoadAverage=432, ProcessCount=172, AvailableMemoryMB=2293 2023-07-16 19:15:43,469 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 19:15:43,470 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-16 19:15:43,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 19:15:43,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,484 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,487 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39610 deadline: 1689536143495, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,496 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 19:15:43,498 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,499 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,500 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-16 19:15:43,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-16 19:15:43,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 19:15:43,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 19:15:43,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,519 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 19:15:43,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,523 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 19:15:43,528 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:43,531 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-16 19:15:43,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 19:15:43,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 19:15:43,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:39610 deadline: 1689536143626, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-16 19:15:43,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 19:15:43,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:43,660 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 19:15:43,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 25 msec 2023-07-16 19:15:43,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 19:15:43,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-16 19:15:43,763 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 19:15:43,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 19:15:43,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 19:15:43,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-16 19:15:43,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,783 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,785 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 19:15:43,787 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,788 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 19:15:43,788 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 19:15:43,789 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,791 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 19:15:43,792 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-16 19:15:43,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 19:15:43,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 19:15:43,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 19:15:43,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 19:15:43,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:39610 deadline: 1689535003898, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-16 19:15:43,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:43,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-16 19:15:43,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 19:15:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 19:15:43,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 19:15:43,908 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 19:15:43,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 19:15:43,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 19:15:43,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 19:15:43,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 19:15:43,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 19:15:43,916 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 19:15:43,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 19:15:43,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 19:15:43,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 19:15:43,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 19:15:43,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 19:15:43,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45041] to rsgroup master 2023-07-16 19:15:43,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 19:15:43,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39610 deadline: 1689536143924, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 2023-07-16 19:15:43,925 WARN [Listener at localhost/42605] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45041 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 19:15:43,927 INFO [Listener at localhost/42605] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 19:15:43,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 19:15:43,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 19:15:43,928 INFO [Listener at localhost/42605] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:39113, jenkins-hbase4.apache.org:40351, jenkins-hbase4.apache.org:41397, jenkins-hbase4.apache.org:44385], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 19:15:43,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 19:15:43,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45041] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 19:15:43,948 INFO [Listener at localhost/42605] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=574 (was 574), OpenFileDescriptor=839 (was 839), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=413 (was 432), ProcessCount=172 (was 172), AvailableMemoryMB=2283 (was 2293) 2023-07-16 19:15:43,948 WARN [Listener at localhost/42605] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 19:15:43,949 INFO [Listener at localhost/42605] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 19:15:43,949 INFO [Listener at localhost/42605] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 19:15:43,949 DEBUG [Listener at localhost/42605] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3720103e to 127.0.0.1:62260 2023-07-16 19:15:43,949 DEBUG [Listener at localhost/42605] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,949 DEBUG [Listener at localhost/42605] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 
19:15:43,949 DEBUG [Listener at localhost/42605] util.JVMClusterUtil(257): Found active master hash=399342591, stopped=false 2023-07-16 19:15:43,949 DEBUG [Listener at localhost/42605] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 19:15:43,949 DEBUG [Listener at localhost/42605] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 19:15:43,949 INFO [Listener at localhost/42605] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45041,1689534940170 2023-07-16 19:15:43,951 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:43,951 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:43,951 INFO [Listener at localhost/42605] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 19:15:43,951 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:43,951 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:43,951 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 19:15:43,951 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 19:15:43,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:43,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:43,952 DEBUG [Listener at localhost/42605] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3fe534ce to 127.0.0.1:62260 2023-07-16 19:15:43,952 DEBUG [Listener at localhost/42605] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:43,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:43,952 INFO [Listener at localhost/42605] regionserver.HRegionServer(2297): ***** STOPPING region 
server 'jenkins-hbase4.apache.org,39113,1689534940238' ***** 2023-07-16 19:15:43,952 INFO [Listener at localhost/42605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:43,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 19:15:43,952 INFO [Listener at localhost/42605] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40351,1689534940288' ***** 2023-07-16 19:15:43,952 INFO [Listener at localhost/42605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:43,952 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:43,952 INFO [Listener at localhost/42605] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41397,1689534940338' ***** 2023-07-16 19:15:43,953 INFO [Listener at localhost/42605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:43,952 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:43,953 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:43,953 INFO [Listener at localhost/42605] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44385,1689534941662' ***** 2023-07-16 19:15:43,956 INFO [Listener at localhost/42605] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 19:15:43,956 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 19:15:43,959 INFO [RS:0;jenkins-hbase4:39113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ab355a6{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:43,959 INFO [RS:1;jenkins-hbase4:40351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@24fd7fc7{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:43,960 INFO [RS:0;jenkins-hbase4:39113] server.AbstractConnector(383): Stopped ServerConnector@30ce9388{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:43,960 INFO [RS:1;jenkins-hbase4:40351] server.AbstractConnector(383): Stopped ServerConnector@66049cc4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:43,960 INFO [RS:0;jenkins-hbase4:39113] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:43,960 INFO [RS:3;jenkins-hbase4:44385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@a9db2d1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:43,960 INFO [RS:2;jenkins-hbase4:41397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@164f60c8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 19:15:43,961 INFO [RS:0;jenkins-hbase4:39113] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@39bfefc6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:43,960 INFO [RS:1;jenkins-hbase4:40351] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:43,962 INFO [RS:3;jenkins-hbase4:44385] server.AbstractConnector(383): Stopped ServerConnector@24e74273{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:43,962 INFO [RS:2;jenkins-hbase4:41397] server.AbstractConnector(383): Stopped ServerConnector@b2fe258{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 19:15:43,962 INFO [RS:0;jenkins-hbase4:39113] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@666a8c86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:43,962 INFO [RS:1;jenkins-hbase4:40351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ceb7e75{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:43,962 INFO [RS:2;jenkins-hbase4:41397] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:43,963 INFO [RS:1;jenkins-hbase4:40351] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@730a2ea8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:43,962 INFO [RS:3;jenkins-hbase4:44385] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 19:15:43,964 INFO [RS:2;jenkins-hbase4:41397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15be34c0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:43,964 INFO [RS:0;jenkins-hbase4:39113] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:43,965 INFO [RS:2;jenkins-hbase4:41397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@12ca8bea{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:43,966 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:43,966 INFO [RS:1;jenkins-hbase4:40351] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:43,966 INFO [RS:0;jenkins-hbase4:39113] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:43,966 INFO [RS:3;jenkins-hbase4:44385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5da1de58{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 19:15:43,966 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:43,966 INFO [RS:0;jenkins-hbase4:39113] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 19:15:43,966 INFO [RS:1;jenkins-hbase4:40351] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:43,967 INFO [RS:1;jenkins-hbase4:40351] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:43,967 INFO [RS:2;jenkins-hbase4:41397] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:43,967 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(3305): Received CLOSE for 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:43,967 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:43,967 INFO [RS:3;jenkins-hbase4:44385] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@478f422{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,STOPPED} 2023-07-16 19:15:43,967 INFO [RS:2;jenkins-hbase4:41397] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:43,967 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40351,1689534940288 2023-07-16 19:15:43,968 INFO [RS:2;jenkins-hbase4:41397] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:43,968 DEBUG [RS:1;jenkins-hbase4:40351] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x25c91e9a to 127.0.0.1:62260 2023-07-16 19:15:43,968 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(3305): Received CLOSE for 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:43,968 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39113,1689534940238 2023-07-16 19:15:43,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 18d5689b5e7f387028c52f78091d478c, disabling compactions & flushes 2023-07-16 19:15:43,968 DEBUG [RS:1;jenkins-hbase4:40351] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 
2023-07-16 19:15:43,968 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41397,1689534940338 2023-07-16 19:15:43,968 DEBUG [RS:2;jenkins-hbase4:41397] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x391b8bc6 to 127.0.0.1:62260 2023-07-16 19:15:43,968 DEBUG [RS:0;jenkins-hbase4:39113] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0b8740e4 to 127.0.0.1:62260 2023-07-16 19:15:43,968 INFO [RS:3;jenkins-hbase4:44385] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 19:15:43,968 DEBUG [RS:2;jenkins-hbase4:41397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,968 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 19:15:43,968 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 19:15:43,968 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1478): Online Regions={24daec907e2a633f9c226ca8f9560ed7=hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.} 2023-07-16 19:15:43,968 DEBUG [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1504): Waiting on 24daec907e2a633f9c226ca8f9560ed7 2023-07-16 19:15:43,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:43,968 INFO [RS:1;jenkins-hbase4:40351] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:43,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. after waiting 0 ms 2023-07-16 19:15:43,968 INFO [RS:3;jenkins-hbase4:44385] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 19:15:43,968 DEBUG [RS:0;jenkins-hbase4:39113] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 24daec907e2a633f9c226ca8f9560ed7, disabling compactions & flushes 2023-07-16 19:15:43,969 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 19:15:43,969 INFO [RS:3;jenkins-hbase4:44385] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 19:15:43,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c. 2023-07-16 19:15:43,969 INFO [RS:1;jenkins-hbase4:40351] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-16 19:15:43,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 18d5689b5e7f387028c52f78091d478c 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-16 19:15:43,969 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44385,1689534941662 2023-07-16 19:15:43,969 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1478): Online Regions={18d5689b5e7f387028c52f78091d478c=hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.} 2023-07-16 19:15:43,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:43,969 DEBUG [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1504): Waiting on 18d5689b5e7f387028c52f78091d478c 2023-07-16 19:15:43,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 2023-07-16 19:15:43,969 DEBUG [RS:3;jenkins-hbase4:44385] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x28d57238 to 127.0.0.1:62260 2023-07-16 19:15:43,969 DEBUG [RS:3;jenkins-hbase4:44385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,969 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44385,1689534941662; all regions closed. 2023-07-16 19:15:43,969 INFO [RS:1;jenkins-hbase4:40351] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 19:15:43,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. after waiting 0 ms 2023-07-16 19:15:43,970 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 19:15:43,970 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7. 
2023-07-16 19:15:43,970 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 19:15:43,970 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-16 19:15:43,970 DEBUG [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-16 19:15:43,970 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 24daec907e2a633f9c226ca8f9560ed7 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-16 19:15:43,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 19:15:43,970 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 19:15:43,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 19:15:43,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 19:15:43,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 19:15:43,971 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-16 19:15:43,975 DEBUG [RS:3;jenkins-hbase4:44385] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs 2023-07-16 19:15:43,975 INFO [RS:3;jenkins-hbase4:44385] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44385%2C1689534941662:(num 1689534941873) 2023-07-16 19:15:43,975 DEBUG [RS:3;jenkins-hbase4:44385] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 19:15:43,975 INFO [RS:3;jenkins-hbase4:44385] regionserver.LeaseManager(133): Closed leases 2023-07-16 19:15:43,975 INFO [RS:3;jenkins-hbase4:44385] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 19:15:43,975 INFO [RS:3;jenkins-hbase4:44385] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 19:15:43,975 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 19:15:43,975 INFO [RS:3;jenkins-hbase4:44385] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 19:15:43,975 INFO [RS:3;jenkins-hbase4:44385] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 19:15:43,976 INFO [RS:3;jenkins-hbase4:44385] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44385
2023-07-16 19:15:43,989 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/.tmp/info/950c24b3553543b191429911d9316f5a
2023-07-16 19:15:43,989 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/.tmp/m/90a5eedbd3d04b7eb285f4251190000e
2023-07-16 19:15:43,990 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/.tmp/info/4f974dcdc4524d79aaac9e5bdf159b2b
2023-07-16 19:15:43,995 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 950c24b3553543b191429911d9316f5a
2023-07-16 19:15:43,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 90a5eedbd3d04b7eb285f4251190000e
2023-07-16 19:15:43,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f974dcdc4524d79aaac9e5bdf159b2b
2023-07-16 19:15:43,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/.tmp/m/90a5eedbd3d04b7eb285f4251190000e as hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/m/90a5eedbd3d04b7eb285f4251190000e
2023-07-16 19:15:43,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/.tmp/info/4f974dcdc4524d79aaac9e5bdf159b2b as hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/info/4f974dcdc4524d79aaac9e5bdf159b2b
2023-07-16 19:15:44,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 90a5eedbd3d04b7eb285f4251190000e
2023-07-16 19:15:44,002 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/m/90a5eedbd3d04b7eb285f4251190000e, entries=12, sequenceid=29, filesize=5.4 K
2023-07-16 19:15:44,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4f974dcdc4524d79aaac9e5bdf159b2b
2023-07-16 19:15:44,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/info/4f974dcdc4524d79aaac9e5bdf159b2b, entries=3, sequenceid=9, filesize=5.0 K
2023-07-16 19:15:44,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 24daec907e2a633f9c226ca8f9560ed7 in 34ms, sequenceid=29, compaction requested=false
2023-07-16 19:15:44,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup'
2023-07-16 19:15:44,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 18d5689b5e7f387028c52f78091d478c in 35ms, sequenceid=9, compaction requested=false
2023-07-16 19:15:44,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace'
2023-07-16 19:15:44,011 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/.tmp/rep_barrier/e27129acb35543e3a43ef26565f16431
2023-07-16 19:15:44,021 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e27129acb35543e3a43ef26565f16431
2023-07-16 19:15:44,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/namespace/18d5689b5e7f387028c52f78091d478c/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1
2023-07-16 19:15:44,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/rsgroup/24daec907e2a633f9c226ca8f9560ed7/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1
2023-07-16 19:15:44,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.
2023-07-16 19:15:44,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 18d5689b5e7f387028c52f78091d478c:
2023-07-16 19:15:44,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-16 19:15:44,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.
2023-07-16 19:15:44,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689534941141.18d5689b5e7f387028c52f78091d478c.
2023-07-16 19:15:44,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 24daec907e2a633f9c226ca8f9560ed7:
2023-07-16 19:15:44,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689534941265.24daec907e2a633f9c226ca8f9560ed7.
2023-07-16 19:15:44,031 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/.tmp/table/e372c0373ad94b47a7da1e6e100c6e5a
2023-07-16 19:15:44,033 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e372c0373ad94b47a7da1e6e100c6e5a
2023-07-16 19:15:44,036 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/.tmp/info/950c24b3553543b191429911d9316f5a as hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/info/950c24b3553543b191429911d9316f5a
2023-07-16 19:15:44,041 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 950c24b3553543b191429911d9316f5a
2023-07-16 19:15:44,041 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/info/950c24b3553543b191429911d9316f5a, entries=22, sequenceid=26, filesize=7.3 K
2023-07-16 19:15:44,041 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,042 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/.tmp/rep_barrier/e27129acb35543e3a43ef26565f16431 as hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/rep_barrier/e27129acb35543e3a43ef26565f16431
2023-07-16 19:15:44,043 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,047 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e27129acb35543e3a43ef26565f16431
2023-07-16 19:15:44,047 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/rep_barrier/e27129acb35543e3a43ef26565f16431, entries=1, sequenceid=26, filesize=4.9 K
2023-07-16 19:15:44,048 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/.tmp/table/e372c0373ad94b47a7da1e6e100c6e5a as hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/table/e372c0373ad94b47a7da1e6e100c6e5a
2023-07-16 19:15:44,052 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e372c0373ad94b47a7da1e6e100c6e5a
2023-07-16 19:15:44,052 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/table/e372c0373ad94b47a7da1e6e100c6e5a, entries=6, sequenceid=26, filesize=5.1 K
2023-07-16 19:15:44,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 83ms, sequenceid=26, compaction requested=false
2023-07-16 19:15:44,053 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta'
2023-07-16 19:15:44,055 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1
2023-07-16 19:15:44,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
2023-07-16 19:15:44,062 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740
2023-07-16 19:15:44,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740:
2023-07-16 19:15:44,063 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44385,1689534941662
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,073 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,074 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44385,1689534941662]
2023-07-16 19:15:44,074 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44385,1689534941662; numProcessing=1
2023-07-16 19:15:44,075 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44385,1689534941662 already deleted, retry=false
2023-07-16 19:15:44,075 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44385,1689534941662 expired; onlineServers=3
2023-07-16 19:15:44,169 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41397,1689534940338; all regions closed.
2023-07-16 19:15:44,169 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39113,1689534940238; all regions closed.
2023-07-16 19:15:44,170 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40351,1689534940288; all regions closed.
2023-07-16 19:15:44,175 DEBUG [RS:2;jenkins-hbase4:41397] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs
2023-07-16 19:15:44,176 INFO [RS:2;jenkins-hbase4:41397] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41397%2C1689534940338:(num 1689534940975)
2023-07-16 19:15:44,176 DEBUG [RS:2;jenkins-hbase4:41397] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-16 19:15:44,176 INFO [RS:2;jenkins-hbase4:41397] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,176 INFO [RS:2;jenkins-hbase4:41397] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-16 19:15:44,176 INFO [RS:2;jenkins-hbase4:41397] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-16 19:15:44,176 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-16 19:15:44,176 INFO [RS:2;jenkins-hbase4:41397] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-16 19:15:44,176 INFO [RS:2;jenkins-hbase4:41397] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-16 19:15:44,176 DEBUG [RS:0;jenkins-hbase4:39113] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs
2023-07-16 19:15:44,176 INFO [RS:0;jenkins-hbase4:39113] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39113%2C1689534940238:(num 1689534940963)
2023-07-16 19:15:44,176 DEBUG [RS:0;jenkins-hbase4:39113] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-16 19:15:44,177 INFO [RS:2;jenkins-hbase4:41397] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41397
2023-07-16 19:15:44,177 INFO [RS:0;jenkins-hbase4:39113] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,178 INFO [RS:0;jenkins-hbase4:39113] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-16 19:15:44,178 INFO [RS:0;jenkins-hbase4:39113] regionserver.CompactSplit(434): Waiting for Split Thread to finish...
2023-07-16 19:15:44,178 INFO [RS:0;jenkins-hbase4:39113] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish...
2023-07-16 19:15:44,178 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-16 19:15:44,178 INFO [RS:0;jenkins-hbase4:39113] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish...
2023-07-16 19:15:44,179 INFO [RS:0;jenkins-hbase4:39113] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39113
2023-07-16 19:15:44,180 DEBUG [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs
2023-07-16 19:15:44,180 INFO [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40351%2C1689534940288.meta:.meta(num 1689534941089)
2023-07-16 19:15:44,180 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338
2023-07-16 19:15:44,180 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338
2023-07-16 19:15:44,181 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41397,1689534940338
2023-07-16 19:15:44,181 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,181 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238
2023-07-16 19:15:44,181 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238
2023-07-16 19:15:44,181 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39113,1689534940238
2023-07-16 19:15:44,183 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41397,1689534940338]
2023-07-16 19:15:44,183 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41397,1689534940338; numProcessing=2
2023-07-16 19:15:44,185 DEBUG [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/oldWALs
2023-07-16 19:15:44,185 INFO [RS:1;jenkins-hbase4:40351] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40351%2C1689534940288:(num 1689534940976)
2023-07-16 19:15:44,185 DEBUG [RS:1;jenkins-hbase4:40351] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-16 19:15:44,185 INFO [RS:1;jenkins-hbase4:40351] regionserver.LeaseManager(133): Closed leases
2023-07-16 19:15:44,185 INFO [RS:1;jenkins-hbase4:40351] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown
2023-07-16 19:15:44,185 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-16 19:15:44,186 INFO [RS:1;jenkins-hbase4:40351] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40351
2023-07-16 19:15:44,283 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,283 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:41397-0x1016f8fc6fa0003, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,283 INFO [RS:2;jenkins-hbase4:41397] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41397,1689534940338; zookeeper connection closed.
2023-07-16 19:15:44,284 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7c4610e6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7c4610e6
2023-07-16 19:15:44,284 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
2023-07-16 19:15:44,284 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40351,1689534940288
2023-07-16 19:15:44,285 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41397,1689534940338 already deleted, retry=false
2023-07-16 19:15:44,285 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41397,1689534940338 expired; onlineServers=2
2023-07-16 19:15:44,285 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39113,1689534940238]
2023-07-16 19:15:44,285 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39113,1689534940238; numProcessing=3
2023-07-16 19:15:44,287 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39113,1689534940238 already deleted, retry=false
2023-07-16 19:15:44,288 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39113,1689534940238 expired; onlineServers=1
2023-07-16 19:15:44,288 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40351,1689534940288]
2023-07-16 19:15:44,288 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40351,1689534940288; numProcessing=4
2023-07-16 19:15:44,289 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40351,1689534940288 already deleted, retry=false
2023-07-16 19:15:44,289 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40351,1689534940288 expired; onlineServers=0
2023-07-16 19:15:44,289 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45041,1689534940170' *****
2023-07-16 19:15:44,289 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0
2023-07-16 19:15:44,290 DEBUG [M:0;jenkins-hbase4:45041] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@13f2edde, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0
2023-07-16 19:15:44,290 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegionServer(1109): Stopping infoServer
2023-07-16 19:15:44,292 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master
2023-07-16 19:15:44,292 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase
2023-07-16 19:15:44,292 INFO [M:0;jenkins-hbase4:45041] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@105e13de{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master}
2023-07-16 19:15:44,292 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-16 19:15:44,293 INFO [M:0;jenkins-hbase4:45041] server.AbstractConnector(383): Stopped ServerConnector@22366487{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-16 19:15:44,293 INFO [M:0;jenkins-hbase4:45041] session.HouseKeeper(149): node0 Stopped scavenging
2023-07-16 19:15:44,293 INFO [M:0;jenkins-hbase4:45041] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69823956{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED}
2023-07-16 19:15:44,294 INFO [M:0;jenkins-hbase4:45041] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@633dccf4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/hadoop.log.dir/,STOPPED}
2023-07-16 19:15:44,294 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45041,1689534940170
2023-07-16 19:15:44,294 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45041,1689534940170; all regions closed.
2023-07-16 19:15:44,294 DEBUG [M:0;jenkins-hbase4:45041] ipc.AbstractRpcClient(494): Stopping rpc client
2023-07-16 19:15:44,295 INFO [M:0;jenkins-hbase4:45041] master.HMaster(1491): Stopping master jetty server
2023-07-16 19:15:44,295 INFO [M:0;jenkins-hbase4:45041] server.AbstractConnector(383): Stopped ServerConnector@5409c29{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2023-07-16 19:15:44,295 DEBUG [M:0;jenkins-hbase4:45041] cleaner.LogCleaner(198): Cancelling LogCleaner
2023-07-16 19:15:44,295 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting.
2023-07-16 19:15:44,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534940667] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689534940667,5,FailOnTimeoutGroup]
2023-07-16 19:15:44,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534940667] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689534940667,5,FailOnTimeoutGroup]
2023-07-16 19:15:44,296 DEBUG [M:0;jenkins-hbase4:45041] cleaner.HFileCleaner(317): Stopping file delete threads
2023-07-16 19:15:44,296 INFO [M:0;jenkins-hbase4:45041] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish...
2023-07-16 19:15:44,296 INFO [M:0;jenkins-hbase4:45041] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish...
2023-07-16 19:15:44,296 INFO [M:0;jenkins-hbase4:45041] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown
2023-07-16 19:15:44,296 DEBUG [M:0;jenkins-hbase4:45041] master.HMaster(1512): Stopping service threads
2023-07-16 19:15:44,296 INFO [M:0;jenkins-hbase4:45041] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher
2023-07-16 19:15:44,296 ERROR [M:0;jenkins-hbase4:45041] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10]
2023-07-16 19:15:44,296 INFO [M:0;jenkins-hbase4:45041] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false
2023-07-16 19:15:44,296 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating.
2023-07-16 19:15:44,297 DEBUG [M:0;jenkins-hbase4:45041] zookeeper.ZKUtil(398): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error)
2023-07-16 19:15:44,297 WARN [M:0;jenkins-hbase4:45041] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-07-16 19:15:44,297 INFO [M:0;jenkins-hbase4:45041] assignment.AssignmentManager(315): Stopping assignment manager
2023-07-16 19:15:44,297 INFO [M:0;jenkins-hbase4:45041] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false
2023-07-16 19:15:44,297 DEBUG [M:0;jenkins-hbase4:45041] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes
2023-07-16 19:15:44,297 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-16 19:15:44,297 DEBUG [M:0;jenkins-hbase4:45041] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-16 19:15:44,297 DEBUG [M:0;jenkins-hbase4:45041] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms
2023-07-16 19:15:44,297 DEBUG [M:0;jenkins-hbase4:45041] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-16 19:15:44,297 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB
2023-07-16 19:15:44,308 INFO [M:0;jenkins-hbase4:45041] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c9b245b4b4ae40edb2538a556712eb87
2023-07-16 19:15:44,313 DEBUG [M:0;jenkins-hbase4:45041] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/c9b245b4b4ae40edb2538a556712eb87 as hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c9b245b4b4ae40edb2538a556712eb87
2023-07-16 19:15:44,318 INFO [M:0;jenkins-hbase4:45041] regionserver.HStore(1080): Added hdfs://localhost:36299/user/jenkins/test-data/b1ac325d-c935-faa0-419a-0926308e1991/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/c9b245b4b4ae40edb2538a556712eb87, entries=22, sequenceid=175, filesize=11.1 K
2023-07-16 19:15:44,319 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78042, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false
2023-07-16 19:15:44,320 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-16 19:15:44,320 DEBUG [M:0;jenkins-hbase4:45041] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682:
2023-07-16 19:15:44,325 INFO [M:0;jenkins-hbase4:45041] flush.MasterFlushTableProcedureManager(83): stop: server shutting down.
2023-07-16 19:15:44,325 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting.
2023-07-16 19:15:44,325 INFO [M:0;jenkins-hbase4:45041] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45041
2023-07-16 19:15:44,327 DEBUG [M:0;jenkins-hbase4:45041] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45041,1689534940170 already deleted, retry=false
2023-07-16 19:15:44,551 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,552 INFO [M:0;jenkins-hbase4:45041] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45041,1689534940170; zookeeper connection closed.
2023-07-16 19:15:44,552 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): master:45041-0x1016f8fc6fa0000, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,652 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,652 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:40351-0x1016f8fc6fa0002, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,652 INFO [RS:1;jenkins-hbase4:40351] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40351,1689534940288; zookeeper connection closed.
2023-07-16 19:15:44,652 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@399b29ea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@399b29ea
2023-07-16 19:15:44,752 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,752 INFO [RS:0;jenkins-hbase4:39113] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39113,1689534940238; zookeeper connection closed.
2023-07-16 19:15:44,752 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:39113-0x1016f8fc6fa0001, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,752 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@dd24819] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@dd24819
2023-07-16 19:15:44,852 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,852 INFO [RS:3;jenkins-hbase4:44385] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44385,1689534941662; zookeeper connection closed.
2023-07-16 19:15:44,852 DEBUG [Listener at localhost/42605-EventThread] zookeeper.ZKWatcher(600): regionserver:44385-0x1016f8fc6fa000b, quorum=127.0.0.1:62260, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-16 19:15:44,853 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@42065255] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@42065255
2023-07-16 19:15:44,853 INFO [Listener at localhost/42605] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-16 19:15:44,853 WARN [Listener at localhost/42605] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-16 19:15:44,857 INFO [Listener at localhost/42605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-16 19:15:44,960 WARN [BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-16 19:15:44,960 WARN [BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-620611439-172.31.14.131-1689534939350 (Datanode Uuid cab3eae8-76a8-4096-9631-fc7d8996bcf7) service to localhost/127.0.0.1:36299
2023-07-16 19:15:44,960 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data5/current/BP-620611439-172.31.14.131-1689534939350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-16 19:15:44,961 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data6/current/BP-620611439-172.31.14.131-1689534939350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-16 19:15:44,961 WARN [Listener at localhost/42605] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-16 19:15:44,965 INFO [Listener at localhost/42605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-16 19:15:45,067 WARN [BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-16 19:15:45,067 WARN [BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-620611439-172.31.14.131-1689534939350 (Datanode Uuid 15e992a8-d227-450b-89fe-697c5b720150) service to localhost/127.0.0.1:36299
2023-07-16 19:15:45,068 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data3/current/BP-620611439-172.31.14.131-1689534939350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-16 19:15:45,068 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data4/current/BP-620611439-172.31.14.131-1689534939350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-16 19:15:45,069 WARN [Listener at localhost/42605] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-16 19:15:45,071 INFO [Listener at localhost/42605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-16 19:15:45,174 WARN [BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-16 19:15:45,174 WARN [BP-620611439-172.31.14.131-1689534939350 heartbeating to localhost/127.0.0.1:36299] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-620611439-172.31.14.131-1689534939350 (Datanode Uuid 93834aaa-932d-4a3e-ac81-4ade5aaa7064) service to localhost/127.0.0.1:36299
2023-07-16 19:15:45,175 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data1/current/BP-620611439-172.31.14.131-1689534939350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-16 19:15:45,176 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8b1f4b71-05b3-ed7e-2405-aff1ebabb834/cluster_8d1507d2-47f2-7653-83b0-a4ce006e8c6a/dfs/data/data2/current/BP-620611439-172.31.14.131-1689534939350] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-16 19:15:45,187 INFO [Listener at localhost/42605] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-16 19:15:45,207 INFO [Listener at localhost/42605] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-16 19:15:45,236 INFO [Listener at localhost/42605] hbase.HBaseTestingUtility(1293): Minicluster is down