2023-07-18 07:14:53,258 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29 2023-07-18 07:14:53,279 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-18 07:14:53,300 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 07:14:53,300 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2, deleteOnExit=true 2023-07-18 07:14:53,300 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 07:14:53,301 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/test.cache.data in system properties and HBase conf 2023-07-18 07:14:53,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 07:14:53,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir in system properties and HBase conf 2023-07-18 07:14:53,302 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 07:14:53,303 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 07:14:53,303 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 07:14:53,425 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-18 07:14:53,913 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 07:14:53,918 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 07:14:53,918 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 07:14:53,919 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 07:14:53,919 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 07:14:53,919 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 07:14:53,920 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 07:14:53,920 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 07:14:53,920 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 07:14:53,921 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 07:14:53,921 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/nfs.dump.dir in system properties and HBase conf 2023-07-18 07:14:53,922 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/java.io.tmpdir in system properties and HBase conf 2023-07-18 07:14:53,922 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 07:14:53,922 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 07:14:53,923 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 07:14:54,414 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 07:14:54,419 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 07:14:54,773 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 07:14:54,974 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-18 07:14:54,989 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:14:55,030 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:14:55,068 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/java.io.tmpdir/Jetty_localhost_43231_hdfs____mwsu6z/webapp 2023-07-18 07:14:55,222 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43231 2023-07-18 07:14:55,235 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 07:14:55,235 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 07:14:55,779 WARN [Listener at localhost/42711] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:14:55,851 WARN [Listener at localhost/42711] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:14:55,871 WARN [Listener at localhost/42711] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:14:55,932 INFO [Listener at localhost/42711] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:14:55,941 INFO [Listener at localhost/42711] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/java.io.tmpdir/Jetty_localhost_43191_datanode____3o5zpt/webapp 2023-07-18 07:14:56,048 INFO [Listener at localhost/42711] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43191 2023-07-18 07:14:56,485 WARN [Listener at localhost/36621] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:14:56,539 WARN [Listener at localhost/36621] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:14:56,543 WARN [Listener at localhost/36621] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:14:56,545 INFO [Listener at localhost/36621] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:14:56,552 INFO [Listener at localhost/36621] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/java.io.tmpdir/Jetty_localhost_40945_datanode____.x9pbx7/webapp 2023-07-18 07:14:56,661 INFO [Listener at localhost/36621] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40945 2023-07-18 07:14:56,672 WARN [Listener at localhost/36225] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:14:56,697 WARN [Listener at localhost/36225] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:14:56,700 WARN [Listener at localhost/36225] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:14:56,702 INFO [Listener at localhost/36225] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:14:56,708 INFO [Listener at localhost/36225] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/java.io.tmpdir/Jetty_localhost_45269_datanode____.20oibp/webapp 2023-07-18 07:14:56,833 INFO [Listener at localhost/36225] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45269 2023-07-18 07:14:56,853 WARN [Listener at localhost/33473] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:14:57,240 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3323cca264ac58eb: Processing first storage report for DS-fba91313-98da-49e7-aca0-d80120d5cf8c from datanode 1325a98d-4e2c-49c8-aeb4-3983ab019f05 2023-07-18 07:14:57,242 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3323cca264ac58eb: from storage DS-fba91313-98da-49e7-aca0-d80120d5cf8c node DatanodeRegistration(127.0.0.1:35973, datanodeUuid=1325a98d-4e2c-49c8-aeb4-3983ab019f05, infoPort=39829, 
infoSecurePort=0, ipcPort=36621, storageInfo=lv=-57;cid=testClusterID;nsid=404566005;c=1689664494510), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-18 07:14:57,242 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x68834ba8d5f6a4b7: Processing first storage report for DS-8a96df73-714a-4f6f-97cc-cc27b08692c2 from datanode 779d76c0-b477-40b2-aa4b-0597835d3a42 2023-07-18 07:14:57,243 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x68834ba8d5f6a4b7: from storage DS-8a96df73-714a-4f6f-97cc-cc27b08692c2 node DatanodeRegistration(127.0.0.1:35935, datanodeUuid=779d76c0-b477-40b2-aa4b-0597835d3a42, infoPort=41335, infoSecurePort=0, ipcPort=36225, storageInfo=lv=-57;cid=testClusterID;nsid=404566005;c=1689664494510), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 07:14:57,243 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3323cca264ac58eb: Processing first storage report for DS-17d44294-91ef-44e1-8e60-807cefee19bb from datanode 1325a98d-4e2c-49c8-aeb4-3983ab019f05 2023-07-18 07:14:57,243 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3323cca264ac58eb: from storage DS-17d44294-91ef-44e1-8e60-807cefee19bb node DatanodeRegistration(127.0.0.1:35973, datanodeUuid=1325a98d-4e2c-49c8-aeb4-3983ab019f05, infoPort=39829, infoSecurePort=0, ipcPort=36621, storageInfo=lv=-57;cid=testClusterID;nsid=404566005;c=1689664494510), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:14:57,243 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x68834ba8d5f6a4b7: Processing first storage report for DS-def70f47-b892-451e-b490-ee99eb9a948f from datanode 779d76c0-b477-40b2-aa4b-0597835d3a42 2023-07-18 07:14:57,243 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x68834ba8d5f6a4b7: from storage DS-def70f47-b892-451e-b490-ee99eb9a948f node DatanodeRegistration(127.0.0.1:35935, datanodeUuid=779d76c0-b477-40b2-aa4b-0597835d3a42, infoPort=41335, infoSecurePort=0, ipcPort=36225, storageInfo=lv=-57;cid=testClusterID;nsid=404566005;c=1689664494510), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:14:57,244 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa2774cc47ef4768a: Processing first storage report for DS-5c4b7b43-3300-44cc-aa7f-e40b05091082 from datanode 6fea3d87-1566-4324-bbdb-4df95f4bb34d 2023-07-18 07:14:57,244 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa2774cc47ef4768a: from storage DS-5c4b7b43-3300-44cc-aa7f-e40b05091082 node DatanodeRegistration(127.0.0.1:44391, datanodeUuid=6fea3d87-1566-4324-bbdb-4df95f4bb34d, infoPort=42927, infoSecurePort=0, ipcPort=33473, storageInfo=lv=-57;cid=testClusterID;nsid=404566005;c=1689664494510), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:14:57,247 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa2774cc47ef4768a: Processing first storage report for DS-2173de79-cf2e-4607-b470-1b2f39ca3416 from datanode 6fea3d87-1566-4324-bbdb-4df95f4bb34d 2023-07-18 07:14:57,247 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa2774cc47ef4768a: from storage 
DS-2173de79-cf2e-4607-b470-1b2f39ca3416 node DatanodeRegistration(127.0.0.1:44391, datanodeUuid=6fea3d87-1566-4324-bbdb-4df95f4bb34d, infoPort=42927, infoSecurePort=0, ipcPort=33473, storageInfo=lv=-57;cid=testClusterID;nsid=404566005;c=1689664494510), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:14:57,426 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29 2023-07-18 07:14:57,527 INFO [Listener at localhost/33473] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/zookeeper_0, clientPort=57245, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 07:14:57,547 INFO [Listener at localhost/33473] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57245 2023-07-18 07:14:57,558 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:14:57,561 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:14:58,312 INFO [Listener at localhost/33473] util.FSUtils(471): Created version file at hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9 with version=8 2023-07-18 07:14:58,312 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/hbase-staging 2023-07-18 07:14:58,324 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 07:14:58,324 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 07:14:58,324 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 07:14:58,324 DEBUG [Listener at localhost/33473] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-18 07:14:58,786 INFO [Listener at localhost/33473] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-18 07:14:59,583 INFO [Listener at localhost/33473] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:14:59,653 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:14:59,654 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:14:59,654 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:14:59,654 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:14:59,655 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:14:59,948 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:00,090 DEBUG [Listener at localhost/33473] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-18 07:15:00,217 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33141 2023-07-18 07:15:00,232 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:00,234 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:00,262 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33141 connecting to ZooKeeper ensemble=127.0.0.1:57245 2023-07-18 07:15:00,320 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:331410x0, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:00,330 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33141-0x1017748aa600000 connected 2023-07-18 07:15:00,365 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:00,366 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:00,369 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:00,381 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33141 2023-07-18 07:15:00,382 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33141 2023-07-18 07:15:00,382 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33141 2023-07-18 07:15:00,383 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33141 2023-07-18 07:15:00,383 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33141 2023-07-18 07:15:00,421 INFO [Listener at localhost/33473] log.Log(170): Logging initialized @7989ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-18 07:15:00,587 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:00,588 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:00,588 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:00,590 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 07:15:00,590 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:00,591 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:00,594 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 07:15:00,675 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 41961 2023-07-18 07:15:00,678 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:00,721 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:00,726 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@660b604c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:00,726 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:00,727 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1b664373{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:00,810 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:00,830 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:00,830 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:00,834 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 07:15:00,849 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:00,886 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@64d6f665{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 07:15:00,902 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@16cb9a38{HTTP/1.1, (http/1.1)}{0.0.0.0:41961} 2023-07-18 07:15:00,902 INFO [Listener at localhost/33473] server.Server(415): Started @8470ms 2023-07-18 07:15:00,907 INFO [Listener at localhost/33473] master.HMaster(444): hbase.rootdir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9, hbase.cluster.distributed=false 2023-07-18 07:15:01,014 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:01,015 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,015 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,015 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 
07:15:01,015 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,015 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:01,024 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:01,029 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41293 2023-07-18 07:15:01,032 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:01,046 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:01,048 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,051 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,053 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41293 connecting to ZooKeeper ensemble=127.0.0.1:57245 2023-07-18 07:15:01,062 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:412930x0, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:01,068 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:412930x0, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:01,072 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:412930x0, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:01,072 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41293-0x1017748aa600001 connected 2023-07-18 07:15:01,073 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:01,083 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41293 2023-07-18 07:15:01,083 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41293 2023-07-18 07:15:01,087 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41293 2023-07-18 07:15:01,094 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41293 2023-07-18 07:15:01,095 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=41293 2023-07-18 07:15:01,098 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:01,098 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:01,098 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:01,100 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:01,100 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:01,100 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:01,100 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:01,103 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 37451 2023-07-18 07:15:01,103 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:01,111 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,111 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1d1f391{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:01,112 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,112 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d349d91{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:01,128 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:01,131 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:01,131 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:01,132 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:01,133 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,139 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7c205c50{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:01,141 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@adfe6ba{HTTP/1.1, (http/1.1)}{0.0.0.0:37451} 2023-07-18 07:15:01,141 INFO [Listener at localhost/33473] server.Server(415): Started @8709ms 2023-07-18 07:15:01,155 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:01,156 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,156 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,156 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:01,156 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,156 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:01,157 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:01,158 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33769 2023-07-18 07:15:01,159 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:01,162 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:01,163 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,165 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,167 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33769 connecting to ZooKeeper ensemble=127.0.0.1:57245 2023-07-18 07:15:01,174 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:337690x0, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:01,175 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:337690x0, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:01,176 
DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:337690x0, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:01,177 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:337690x0, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:01,178 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33769-0x1017748aa600002 connected 2023-07-18 07:15:01,179 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33769 2023-07-18 07:15:01,179 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33769 2023-07-18 07:15:01,183 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33769 2023-07-18 07:15:01,183 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33769 2023-07-18 07:15:01,183 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33769 2023-07-18 07:15:01,187 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:01,187 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:01,187 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:01,188 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:01,189 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:01,189 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:01,189 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 07:15:01,190 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 40763 2023-07-18 07:15:01,190 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:01,193 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,193 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6462cf1b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:01,193 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,194 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1dbceb61{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:01,205 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:01,206 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:01,206 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:01,207 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 07:15:01,207 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,208 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@655a6898{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:01,209 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@8c593f2{HTTP/1.1, (http/1.1)}{0.0.0.0:40763} 2023-07-18 07:15:01,209 INFO [Listener at localhost/33473] server.Server(415): Started @8777ms 2023-07-18 07:15:01,221 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:01,222 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,222 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,222 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:01,222 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-18 07:15:01,222 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:01,222 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:01,224 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39465 2023-07-18 07:15:01,224 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:01,225 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:01,227 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,229 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,230 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39465 connecting to ZooKeeper ensemble=127.0.0.1:57245 2023-07-18 07:15:01,235 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:394650x0, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:01,236 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39465-0x1017748aa600003 connected 2023-07-18 07:15:01,236 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:01,237 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:01,238 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:01,238 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39465 2023-07-18 07:15:01,238 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39465 2023-07-18 07:15:01,239 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39465 2023-07-18 07:15:01,239 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39465 2023-07-18 07:15:01,239 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39465 2023-07-18 07:15:01,242 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:01,242 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:01,242 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:01,243 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:01,243 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:01,243 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:01,243 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:01,244 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 38737 2023-07-18 07:15:01,244 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:01,246 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,247 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@54df227c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:01,247 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,247 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@40731eac{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:01,255 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:01,255 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:01,256 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:01,256 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 07:15:01,258 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:01,258 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6efb77a4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:01,259 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@1f112788{HTTP/1.1, (http/1.1)}{0.0.0.0:38737} 2023-07-18 07:15:01,260 INFO [Listener at localhost/33473] server.Server(415): Started @8828ms 2023-07-18 07:15:01,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:01,269 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7a1a3238{HTTP/1.1, (http/1.1)}{0.0.0.0:36121} 2023-07-18 07:15:01,269 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8837ms 2023-07-18 07:15:01,269 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:01,280 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 07:15:01,282 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:01,306 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:01,306 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:01,306 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:01,306 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:01,307 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:01,309 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:01,311 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33141,1689664498534 from backup master directory 2023-07-18 
07:15:01,311 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:01,315 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:01,315 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 07:15:01,316 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:01,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:01,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-18 07:15:01,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-18 07:15:01,426 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/hbase.id with ID: 456d955b-b6e4-4117-84e7-1f3d706ecbe3 2023-07-18 07:15:01,477 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:01,497 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:01,561 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x3e02c867 to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:01,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@251c575f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:01,614 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:01,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 07:15:01,637 DEBUG 
[master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-18 07:15:01,637 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-18 07:15:01,639 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 07:15:01,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-18 07:15:01,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:01,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store-tmp 2023-07-18 07:15:01,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:01,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 07:15:01,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:01,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:01,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 07:15:01,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:01,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
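The two DEBUG stack traces above are expected rather than failures: at class-initialization time the async WAL output helpers probe the Hadoop client by reflection and fall back when a flag or method is missing (here, a Hadoop build without CreateFlag.SHOULD_REPLICATE and without the HDFS-12396 method). A minimal sketch of that probe-and-fall-back pattern follows; the helper class and method names are illustrative, not the actual FanOutOneBlockAsyncDFSOutputHelper code.

```java
// Hypothetical sketch of the reflection probes behind the DEBUG traces above:
// try a capability, and fall back quietly when the running Hadoop lacks it.
public final class HadoopCapabilityProbe {

  /** Does the running Hadoop define CreateFlag.SHOULD_REPLICATE (3.x and later)? */
  @SuppressWarnings({"unchecked", "rawtypes"})
  static boolean hasShouldReplicateFlag() {
    try {
      Class<? extends Enum> createFlag =
          (Class<? extends Enum>) Class.forName("org.apache.hadoop.fs.CreateFlag");
      // Enum.valueOf throws IllegalArgumentException when the constant is missing,
      // which is exactly what the "can not find SHOULD_REPLICATE flag" trace shows.
      Enum.valueOf(createFlag, "SHOULD_REPLICATE");
      return true;
    } catch (ClassNotFoundException | IllegalArgumentException e) {
      return false; // hadoop 2.x: fall back to the older create() path
    }
  }

  /** Does DFSClient carry the HDFS-12396 decryptEncryptedDataEncryptionKey method? */
  static boolean hasHdfs12396() {
    try {
      Class<?> dfsClient = Class.forName("org.apache.hadoop.hdfs.DFSClient");
      Class<?> feInfo = Class.forName("org.apache.hadoop.fs.FileEncryptionInfo");
      dfsClient.getDeclaredMethod("decryptEncryptedDataEncryptionKey", feInfo);
      return true;
    } catch (ClassNotFoundException | NoSuchMethodException e) {
      return false; // older Hadoop: use the pre-HDFS-12396 crypto helper
    }
  }
}
```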
2023-07-18 07:15:01,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:01,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/WALs/jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:01,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33141%2C1689664498534, suffix=, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/WALs/jenkins-hbase4.apache.org,33141,1689664498534, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/oldWALs, maxLogs=10 2023-07-18 07:15:01,828 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:01,828 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:01,828 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:01,838 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-18 07:15:01,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/WALs/jenkins-hbase4.apache.org,33141,1689664498534/jenkins-hbase4.apache.org%2C33141%2C1689664498534.1689664501767 2023-07-18 07:15:01,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK], DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK], DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK]] 2023-07-18 07:15:01,914 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:01,915 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:01,920 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:01,921 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:02,012 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:02,020 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 07:15:02,057 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 07:15:02,073 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-18 07:15:02,079 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:02,081 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:02,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:02,116 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:02,117 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11674839360, jitterRate=0.08730414509773254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:02,117 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:02,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 07:15:02,154 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 07:15:02,154 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 07:15:02,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 07:15:02,160 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-18 07:15:02,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 48 msec 2023-07-18 07:15:02,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 07:15:02,241 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 07:15:02,248 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-18 07:15:02,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 07:15:02,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 07:15:02,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 07:15:02,279 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:02,289 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 07:15:02,289 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 07:15:02,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 07:15:02,319 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:02,319 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:02,319 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:02,319 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:02,320 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:02,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33141,1689664498534, sessionid=0x1017748aa600000, setting cluster-up flag (Was=false) 2023-07-18 07:15:02,347 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:02,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 07:15:02,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:02,364 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:02,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 07:15:02,373 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:02,375 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.hbase-snapshot/.tmp 2023-07-18 07:15:02,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 07:15:02,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 07:15:02,475 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(951): ClusterId : 456d955b-b6e4-4117-84e7-1f3d706ecbe3 2023-07-18 07:15:02,476 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(951): ClusterId : 456d955b-b6e4-4117-84e7-1f3d706ecbe3 2023-07-18 07:15:02,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 07:15:02,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-18 07:15:02,475 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(951): ClusterId : 456d955b-b6e4-4117-84e7-1f3d706ecbe3 2023-07-18 07:15:02,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
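The recurring "Received ZooKeeper Event" lines are watcher callbacks the master and region servers keep on znodes such as /hbase/master, /hbase/running and /hbase/rs. A stand-alone sketch of that watch-and-notify pattern with the plain ZooKeeper client is below; the quorum address and path are taken from the log, but the class itself is illustrative rather than HBase's ZKWatcher.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Illustrative only: a one-shot watch on the mini-cluster's /hbase/master znode,
// mirroring "Received ZooKeeper Event, type=NodeCreated, path=/hbase/master" above.
public class MasterZNodeWatchExample {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:57245", 90_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();

    // exists() both answers "is the active master registered yet?" and sets a watch;
    // the next NodeCreated/NodeDeleted on the path fires the callback exactly once.
    zk.exists("/hbase/master", (WatchedEvent event) ->
        System.out.println("Received ZooKeeper Event, type=" + event.getType()
            + ", path=" + event.getPath()));

    Thread.sleep(10_000); // keep the session alive long enough to observe an event
    zk.close();
  }
}
```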
2023-07-18 07:15:02,489 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:02,490 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:02,490 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:02,497 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:02,497 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:02,497 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:02,497 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:02,497 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:02,497 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:02,507 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:02,507 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:02,507 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:02,519 DEBUG [RS:1;jenkins-hbase4:33769] zookeeper.ReadOnlyZKClient(139): Connect 0x6a1d1a7c to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:02,520 DEBUG [RS:2;jenkins-hbase4:39465] zookeeper.ReadOnlyZKClient(139): Connect 0x4096f316 to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:02,520 DEBUG [RS:0;jenkins-hbase4:41293] zookeeper.ReadOnlyZKClient(139): Connect 0x6fc4ff21 to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:02,587 DEBUG [RS:2;jenkins-hbase4:39465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e510574, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:02,589 DEBUG [RS:2;jenkins-hbase4:39465] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4a6e9866, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:02,605 DEBUG [RS:1;jenkins-hbase4:33769] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64dae94f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:02,605 DEBUG [RS:1;jenkins-hbase4:33769] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60e5710c, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:02,608 DEBUG [RS:0;jenkins-hbase4:41293] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61031ca9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:02,608 DEBUG [RS:0;jenkins-hbase4:41293] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@203e1692, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:02,629 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:41293 2023-07-18 07:15:02,635 INFO [RS:0;jenkins-hbase4:41293] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:02,636 INFO [RS:0;jenkins-hbase4:41293] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:02,640 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:02,636 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33769 2023-07-18 07:15:02,651 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:02,651 INFO [RS:1;jenkins-hbase4:33769] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:02,653 INFO [RS:1;jenkins-hbase4:33769] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:02,653 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:02,657 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33141,1689664498534 with isa=jenkins-hbase4.apache.org/172.31.14.131:33769, startcode=1689664501155 2023-07-18 07:15:02,657 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33141,1689664498534 with isa=jenkins-hbase4.apache.org/172.31.14.131:41293, startcode=1689664501013 2023-07-18 07:15:02,661 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:39465 2023-07-18 07:15:02,661 INFO [RS:2;jenkins-hbase4:39465] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:02,662 INFO [RS:2;jenkins-hbase4:39465] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:02,662 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 07:15:02,663 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33141,1689664498534 with isa=jenkins-hbase4.apache.org/172.31.14.131:39465, startcode=1689664501221 2023-07-18 07:15:02,690 DEBUG [RS:0;jenkins-hbase4:41293] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:02,691 DEBUG [RS:1;jenkins-hbase4:33769] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:02,690 DEBUG [RS:2;jenkins-hbase4:39465] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:02,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 07:15:02,725 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 07:15:02,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 07:15:02,727 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-18 07:15:02,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:02,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:02,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 07:15:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:02,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:02,743 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689664532743 2023-07-18 07:15:02,746 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 07:15:02,753 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 07:15:02,759 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:02,764 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 07:15:02,769 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:02,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 07:15:02,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 07:15:02,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 07:15:02,790 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 07:15:02,791 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38163, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:02,791 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48493, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:02,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:02,793 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44809, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:02,821 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 07:15:02,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 07:15:02,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 07:15:02,827 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:02,828 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 07:15:02,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 07:15:02,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large 
file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664502854,5,FailOnTimeoutGroup] 2023-07-18 07:15:02,859 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664502855,5,FailOnTimeoutGroup] 2023-07-18 07:15:02,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:02,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 07:15:02,868 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:02,868 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:02,869 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:02,899 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:02,900 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 07:15:02,904 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 07:15:02,917 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9 2023-07-18 07:15:02,917 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42711 2023-07-18 07:15:02,917 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41961 2023-07-18 07:15:02,919 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 07:15:02,919 WARN [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
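The ServerNotRunningYetException traces and the "reportForDuty failed; sleeping 100 ms and then retrying" warnings are the usual startup race: each region server keeps re-sending its registration until the master finishes initializing. A compact, hypothetical sketch of that retry loop, not HRegionServer's actual code:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// Hypothetical illustration of the reportForDuty retry seen above: keep calling a
// registration routine and sleep a fixed back-off until the master accepts it.
public final class ReportForDutyLoop {

  static void reportForDutyUntilAccepted(BooleanSupplier registerOnce, long sleepMillis)
      throws InterruptedException {
    while (!registerOnce.getAsBoolean()) {
      // Mirrors "reportForDuty failed; sleeping 100 ms and then retrying."
      TimeUnit.MILLISECONDS.sleep(sleepMillis);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Stand-in for the RegionServerStatusService RPC: this fake "master" only
    // starts accepting registrations after ~300 ms, as it does while initializing.
    reportForDutyUntilAccepted(() -> System.currentTimeMillis() - start > 300, 100);
    System.out.println("registered with master");
  }
}
```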
2023-07-18 07:15:02,923 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(2830): Master is not running yet 2023-07-18 07:15:02,924 WARN [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-18 07:15:02,928 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:02,929 DEBUG [RS:0;jenkins-hbase4:41293] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:02,929 WARN [RS:0;jenkins-hbase4:41293] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:02,931 INFO [RS:0;jenkins-hbase4:41293] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:02,931 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:02,934 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41293,1689664501013] 2023-07-18 07:15:02,942 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:02,943 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:02,943 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9 2023-07-18 07:15:02,947 DEBUG [RS:0;jenkins-hbase4:41293] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:02,972 DEBUG 
[RS:0;jenkins-hbase4:41293] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:02,976 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:02,979 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:02,981 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info 2023-07-18 07:15:02,982 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:02,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:02,983 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:02,986 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:02,986 INFO [RS:0;jenkins-hbase4:41293] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:02,987 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:02,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:02,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:02,990 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table 2023-07-18 07:15:02,991 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:02,992 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:02,993 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740 2023-07-18 07:15:02,994 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740 2023-07-18 07:15:02,998 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 07:15:03,000 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:03,003 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:03,005 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9602909760, jitterRate=-0.10565933585166931}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:03,005 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:03,005 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:03,005 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:03,005 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:03,005 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:03,005 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:03,007 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:03,007 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:03,014 INFO [RS:0;jenkins-hbase4:41293] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:03,016 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:03,016 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 07:15:03,021 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33141,1689664498534 with isa=jenkins-hbase4.apache.org/172.31.14.131:39465, startcode=1689664501221 2023-07-18 07:15:03,022 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,022 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
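The flushSizeLowerBound=44739242 reported for hbase:meta is simply the 128 MB region flush size divided by its three column families, since hbase.hregion.percolumnfamilyflush.size.lower.bound is not set; a tiny check of that arithmetic:

```java
// Reproduces the number in the log: with no explicit per-family lower bound,
// the policy falls back to memstore flush size / number of column families.
public class FlushLowerBoundCheck {
  public static void main(String[] args) {
    long memstoreFlushSize = 134_217_728L; // 128 MB flush size from the config above
    int columnFamilies = 3;                // hbase:meta has info, rep_barrier, table
    long lowerBound = memstoreFlushSize / columnFamilies;
    System.out.println(lowerBound);        // 44739242, as in FlushLargeStoresPolicy{...}
    System.out.printf("%.1f MB%n", lowerBound / 1024.0 / 1024.0); // ~42.7 MB, as logged
  }
}
```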
2023-07-18 07:15:03,022 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 07:15:03,023 INFO [RS:0;jenkins-hbase4:41293] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:03,024 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9 2023-07-18 07:15:03,024 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,024 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42711 2023-07-18 07:15:03,024 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41961 2023-07-18 07:15:03,024 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33141,1689664498534 with isa=jenkins-hbase4.apache.org/172.31.14.131:33769, startcode=1689664501155 2023-07-18 07:15:03,025 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,025 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 07:15:03,026 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 07:15:03,027 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:03,028 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9 2023-07-18 07:15:03,028 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42711 2023-07-18 07:15:03,028 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41961 2023-07-18 07:15:03,029 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:03,029 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:03,030 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 07:15:03,030 DEBUG [RS:2;jenkins-hbase4:39465] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,030 WARN [RS:2;jenkins-hbase4:39465] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:03,031 INFO [RS:2;jenkins-hbase4:39465] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:03,031 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39465,1689664501221] 2023-07-18 07:15:03,032 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,032 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33769,1689664501155] 2023-07-18 07:15:03,032 DEBUG [RS:1;jenkins-hbase4:33769] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,032 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,032 WARN [RS:1;jenkins-hbase4:33769] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 07:15:03,032 INFO [RS:1;jenkins-hbase4:33769] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:03,032 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,033 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,034 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,040 DEBUG [RS:2;jenkins-hbase4:39465] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,040 DEBUG [RS:1;jenkins-hbase4:33769] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,040 DEBUG [RS:2;jenkins-hbase4:39465] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,040 DEBUG [RS:1;jenkins-hbase4:33769] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,041 DEBUG [RS:2;jenkins-hbase4:39465] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,041 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:03,041 DEBUG [RS:1;jenkins-hbase4:33769] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,041 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,041 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,041 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,041 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,042 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,042 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:03,042 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,042 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,042 DEBUG [RS:2;jenkins-hbase4:39465] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:03,043 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,042 DEBUG [RS:1;jenkins-hbase4:33769] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:03,043 DEBUG [RS:0;jenkins-hbase4:41293] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,043 INFO [RS:2;jenkins-hbase4:39465] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:03,044 INFO [RS:1;jenkins-hbase4:33769] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:03,052 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,052 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 07:15:03,052 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:03,052 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,052 INFO [RS:2;jenkins-hbase4:39465] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:03,059 INFO [RS:2;jenkins-hbase4:39465] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:03,059 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,060 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:03,062 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 07:15:03,063 INFO [RS:1;jenkins-hbase4:33769] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:03,063 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,063 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,064 INFO [RS:1;jenkins-hbase4:33769] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:03,064 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,064 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:03,064 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,064 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,065 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:03,065 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,065 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:03,065 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,066 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,066 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,066 DEBUG [RS:2;jenkins-hbase4:39465] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,067 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,067 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,067 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,067 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:03,067 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,067 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,068 DEBUG [RS:1;jenkins-hbase4:33769] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:03,069 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,069 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,069 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,081 INFO [RS:2;jenkins-hbase4:39465] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:03,081 INFO [RS:0;jenkins-hbase4:41293] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:03,085 INFO [RS:1;jenkins-hbase4:33769] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:03,087 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41293,1689664501013-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,087 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33769,1689664501155-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,087 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39465,1689664501221-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:03,116 INFO [RS:1;jenkins-hbase4:33769] regionserver.Replication(203): jenkins-hbase4.apache.org,33769,1689664501155 started 2023-07-18 07:15:03,116 INFO [RS:2;jenkins-hbase4:39465] regionserver.Replication(203): jenkins-hbase4.apache.org,39465,1689664501221 started 2023-07-18 07:15:03,116 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33769,1689664501155, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33769, sessionid=0x1017748aa600002 2023-07-18 07:15:03,116 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39465,1689664501221, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39465, sessionid=0x1017748aa600003 2023-07-18 07:15:03,116 INFO [RS:0;jenkins-hbase4:41293] regionserver.Replication(203): jenkins-hbase4.apache.org,41293,1689664501013 started 2023-07-18 07:15:03,116 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41293,1689664501013, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41293, sessionid=0x1017748aa600001 2023-07-18 07:15:03,116 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:03,117 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:03,117 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:03,117 DEBUG [RS:0;jenkins-hbase4:41293] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,117 DEBUG [RS:1;jenkins-hbase4:33769] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,118 DEBUG [RS:0;jenkins-hbase4:41293] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41293,1689664501013' 2023-07-18 07:15:03,118 DEBUG [RS:1;jenkins-hbase4:33769] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33769,1689664501155' 2023-07-18 07:15:03,118 DEBUG [RS:1;jenkins-hbase4:33769] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:03,117 DEBUG [RS:2;jenkins-hbase4:39465] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,118 DEBUG [RS:0;jenkins-hbase4:41293] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:03,118 DEBUG [RS:2;jenkins-hbase4:39465] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39465,1689664501221' 2023-07-18 07:15:03,118 DEBUG [RS:2;jenkins-hbase4:39465] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:03,119 DEBUG [RS:1;jenkins-hbase4:33769] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:03,119 DEBUG [RS:0;jenkins-hbase4:41293] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:03,119 DEBUG 
[RS:2;jenkins-hbase4:39465] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:03,120 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:03,120 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:03,120 DEBUG [RS:1;jenkins-hbase4:33769] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,120 DEBUG [RS:1;jenkins-hbase4:33769] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33769,1689664501155' 2023-07-18 07:15:03,120 DEBUG [RS:1;jenkins-hbase4:33769] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:03,121 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:03,121 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:03,121 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:03,122 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:03,122 DEBUG [RS:0;jenkins-hbase4:41293] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,122 DEBUG [RS:1;jenkins-hbase4:33769] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:03,122 DEBUG [RS:0;jenkins-hbase4:41293] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41293,1689664501013' 2023-07-18 07:15:03,122 DEBUG [RS:0;jenkins-hbase4:41293] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:03,121 DEBUG [RS:2;jenkins-hbase4:39465] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:03,123 DEBUG [RS:2;jenkins-hbase4:39465] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39465,1689664501221' 2023-07-18 07:15:03,123 DEBUG [RS:2;jenkins-hbase4:39465] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:03,123 DEBUG [RS:1;jenkins-hbase4:33769] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:03,123 DEBUG [RS:0;jenkins-hbase4:41293] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:03,123 DEBUG [RS:2;jenkins-hbase4:39465] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:03,123 INFO [RS:1;jenkins-hbase4:33769] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:03,123 INFO [RS:1;jenkins-hbase4:33769] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 07:15:03,124 DEBUG [RS:0;jenkins-hbase4:41293] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:03,124 INFO [RS:0;jenkins-hbase4:41293] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:03,124 DEBUG [RS:2;jenkins-hbase4:39465] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:03,124 INFO [RS:0;jenkins-hbase4:41293] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 07:15:03,125 INFO [RS:2;jenkins-hbase4:39465] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:03,125 INFO [RS:2;jenkins-hbase4:39465] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 07:15:03,215 DEBUG [jenkins-hbase4:33141] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 07:15:03,235 DEBUG [jenkins-hbase4:33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:03,236 DEBUG [jenkins-hbase4:33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:03,236 DEBUG [jenkins-hbase4:33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:03,236 DEBUG [jenkins-hbase4:33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:03,237 DEBUG [jenkins-hbase4:33141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:03,237 INFO [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41293%2C1689664501013, suffix=, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,41293,1689664501013, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs, maxLogs=32 2023-07-18 07:15:03,237 INFO [RS:2;jenkins-hbase4:39465] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39465%2C1689664501221, suffix=, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,39465,1689664501221, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs, maxLogs=32 2023-07-18 07:15:03,242 INFO [RS:1;jenkins-hbase4:33769] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33769%2C1689664501155, suffix=, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,33769,1689664501155, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs, maxLogs=32 2023-07-18 07:15:03,244 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41293,1689664501013, state=OPENING 2023-07-18 07:15:03,253 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 07:15:03,256 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:03,257 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:03,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:03,283 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:03,283 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:03,297 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:03,297 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:03,298 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:03,298 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:03,303 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:03,303 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:03,303 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:03,314 INFO [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,41293,1689664501013/jenkins-hbase4.apache.org%2C41293%2C1689664501013.1689664503243 2023-07-18 07:15:03,316 INFO [RS:1;jenkins-hbase4:33769] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,33769,1689664501155/jenkins-hbase4.apache.org%2C33769%2C1689664501155.1689664503252 2023-07-18 07:15:03,316 DEBUG [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK], DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK], DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK]] 2023-07-18 07:15:03,318 INFO [RS:2;jenkins-hbase4:39465] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,39465,1689664501221/jenkins-hbase4.apache.org%2C39465%2C1689664501221.1689664503252 2023-07-18 07:15:03,322 DEBUG [RS:1;jenkins-hbase4:33769] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK], DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK], DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK]] 2023-07-18 07:15:03,322 DEBUG [RS:2;jenkins-hbase4:39465] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK], DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK], DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK]] 2023-07-18 07:15:03,326 WARN [ReadOnlyZKClient-127.0.0.1:57245@0x3e02c867] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 07:15:03,359 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:03,362 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58556, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:03,363 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41293] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:58556 deadline: 1689664563363, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,481 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,485 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:03,490 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58568, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:03,500 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 07:15:03,501 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:03,504 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C41293%2C1689664501013.meta, suffix=.meta, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,41293,1689664501013, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs, maxLogs=32 2023-07-18 07:15:03,522 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:03,523 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:03,523 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:03,534 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,41293,1689664501013/jenkins-hbase4.apache.org%2C41293%2C1689664501013.meta.1689664503506.meta 2023-07-18 07:15:03,535 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK], DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK], DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK]] 2023-07-18 07:15:03,535 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:03,537 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:03,540 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 07:15:03,542 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 07:15:03,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 07:15:03,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:03,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 07:15:03,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 07:15:03,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:03,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info 2023-07-18 07:15:03,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info 2023-07-18 07:15:03,558 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:03,559 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:03,559 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:03,560 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:03,561 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:03,561 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:03,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:03,562 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:03,564 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table 2023-07-18 07:15:03,564 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table 2023-07-18 07:15:03,564 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:03,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:03,566 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740 2023-07-18 07:15:03,569 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740 2023-07-18 07:15:03,572 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 07:15:03,575 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:03,576 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10075743360, jitterRate=-0.06162327527999878}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:03,576 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:03,588 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689664503477 2023-07-18 07:15:03,608 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 07:15:03,609 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 07:15:03,609 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41293,1689664501013, state=OPEN 2023-07-18 07:15:03,612 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 07:15:03,612 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:03,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 07:15:03,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41293,1689664501013 in 347 msec 2023-07-18 07:15:03,621 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 07:15:03,621 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 588 msec 2023-07-18 07:15:03,627 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.1350 sec 2023-07-18 07:15:03,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689664503627, completionTime=-1 2023-07-18 07:15:03,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 07:15:03,627 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 07:15:03,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 07:15:03,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689664563699 2023-07-18 07:15:03,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689664623699 2023-07-18 07:15:03,699 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 71 msec 2023-07-18 07:15:03,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33141,1689664498534-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,715 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33141,1689664498534-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,716 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33141,1689664498534-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33141, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:03,728 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 07:15:03,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 07:15:03,744 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:03,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 07:15:03,764 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:03,767 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:03,785 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:03,788 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 empty. 2023-07-18 07:15:03,788 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:03,789 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 07:15:03,827 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:03,829 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 41aebbe53986314d2b2440254cc81255, NAME => 'hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:03,847 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:03,847 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 41aebbe53986314d2b2440254cc81255, disabling compactions & flushes 2023-07-18 07:15:03,848 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 
2023-07-18 07:15:03,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:03,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. after waiting 0 ms 2023-07-18 07:15:03,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:03,848 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:03,848 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 41aebbe53986314d2b2440254cc81255: 2023-07-18 07:15:03,852 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:03,874 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664503855"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664503855"}]},"ts":"1689664503855"} 2023-07-18 07:15:03,883 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:03,885 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 07:15:03,888 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:03,890 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:03,894 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:03,895 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd empty. 
2023-07-18 07:15:03,896 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:03,896 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 07:15:03,911 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:03,917 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:03,925 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:03,926 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664503917"}]},"ts":"1689664503917"} 2023-07-18 07:15:03,928 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 428bd5fcdb04976e830cf8a9b852f2cd, NAME => 'hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:03,936 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 07:15:03,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:03,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:03,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:03,943 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:03,944 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:03,949 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, ASSIGN}] 2023-07-18 07:15:03,953 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, ASSIGN 2023-07-18 07:15:03,954 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:03,954 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 428bd5fcdb04976e830cf8a9b852f2cd, disabling compactions & flushes 2023-07-18 07:15:03,954 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:03,954 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:03,954 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. after waiting 0 ms 2023-07-18 07:15:03,954 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:03,954 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:03,954 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 428bd5fcdb04976e830cf8a9b852f2cd: 2023-07-18 07:15:03,956 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:03,958 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:03,960 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664503959"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664503959"}]},"ts":"1689664503959"} 2023-07-18 07:15:03,965 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
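
The MetaTableAccessor Put entries above are the hbase:meta rows that record each new region's regioninfo and state. Client code normally consumes that information through a RegionLocator rather than by reading hbase:meta directly. A minimal sketch, assuming an already-open `Connection` is passed in:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

final class MetaLocations {
  // Prints where each region of the given table is currently hosted,
  // i.e. the assignment data the procedures above wrote into hbase:meta.
  static void printLocations(Connection conn, String table) throws IOException {
    try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " -> " + loc.getServerName());
      }
    }
  }
}
```
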
2023-07-18 07:15:03,966 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:03,966 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664503966"}]},"ts":"1689664503966"} 2023-07-18 07:15:03,970 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 07:15:03,974 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:03,974 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:03,974 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:03,974 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:03,975 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:03,975 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, ASSIGN}] 2023-07-18 07:15:03,978 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, ASSIGN 2023-07-18 07:15:03,983 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:03,984 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
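
At this point both ASSIGN subprocedures (pid=6 and pid=7) have been queued and the balancer has chosen target servers. Test code does not drive these procedures directly; it typically just waits for the regions to come online. A short sketch of that pattern, assuming `TEST_UTIL` is the test's `HBaseTestingUtility` instance:

```java
import org.apache.hadoop.hbase.TableName;

// Block until every region of the table has been assigned and opened on some region server.
TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:rsgroup"));
// Or, for most purposes, simply wait until the table is available to clients.
TEST_UTIL.waitTableAvailable(TableName.valueOf("hbase:namespace"));
```
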
2023-07-18 07:15:03,987 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=41aebbe53986314d2b2440254cc81255, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:03,987 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=428bd5fcdb04976e830cf8a9b852f2cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:03,988 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664503987"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664503987"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664503987"}]},"ts":"1689664503987"} 2023-07-18 07:15:03,988 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664503987"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664503987"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664503987"}]},"ts":"1689664503987"} 2023-07-18 07:15:03,996 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 41aebbe53986314d2b2440254cc81255, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:03,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:04,152 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:04,152 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:04,155 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48984, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:04,163 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:04,164 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 428bd5fcdb04976e830cf8a9b852f2cd, NAME => 'hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:04,164 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 
2023-07-18 07:15:04,164 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:04,164 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 41aebbe53986314d2b2440254cc81255, NAME => 'hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:04,164 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. service=MultiRowMutationService 2023-07-18 07:15:04,165 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,166 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,170 INFO [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,170 INFO [StoreOpener-41aebbe53986314d2b2440254cc81255-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,172 
DEBUG [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m 2023-07-18 07:15:04,172 DEBUG [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m 2023-07-18 07:15:04,172 DEBUG [StoreOpener-41aebbe53986314d2b2440254cc81255-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info 2023-07-18 07:15:04,173 DEBUG [StoreOpener-41aebbe53986314d2b2440254cc81255-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info 2023-07-18 07:15:04,173 INFO [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 428bd5fcdb04976e830cf8a9b852f2cd columnFamilyName m 2023-07-18 07:15:04,173 INFO [StoreOpener-41aebbe53986314d2b2440254cc81255-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 41aebbe53986314d2b2440254cc81255 columnFamilyName info 2023-07-18 07:15:04,174 INFO [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] regionserver.HStore(310): Store=428bd5fcdb04976e830cf8a9b852f2cd/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:04,175 INFO [StoreOpener-41aebbe53986314d2b2440254cc81255-1] regionserver.HStore(310): Store=41aebbe53986314d2b2440254cc81255/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:04,177 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,177 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,178 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,178 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:04,182 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:04,186 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:04,187 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:04,187 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 428bd5fcdb04976e830cf8a9b852f2cd; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3042686c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:04,187 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 41aebbe53986314d2b2440254cc81255; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9412679360, jitterRate=-0.12337592244148254}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:04,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 428bd5fcdb04976e830cf8a9b852f2cd: 2023-07-18 07:15:04,188 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 41aebbe53986314d2b2440254cc81255: 2023-07-18 07:15:04,190 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255., pid=8, masterSystemTime=1689664504152 2023-07-18 07:15:04,193 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd., pid=9, masterSystemTime=1689664504153 2023-07-18 07:15:04,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:04,196 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:04,198 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:04,198 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:04,198 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=41aebbe53986314d2b2440254cc81255, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:04,198 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664504197"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664504197"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664504197"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664504197"}]},"ts":"1689664504197"} 2023-07-18 07:15:04,199 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=428bd5fcdb04976e830cf8a9b852f2cd, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:04,199 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664504198"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664504198"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664504198"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664504198"}]},"ts":"1689664504198"} 2023-07-18 07:15:04,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 07:15:04,207 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 41aebbe53986314d2b2440254cc81255, server=jenkins-hbase4.apache.org,33769,1689664501155 in 205 msec 2023-07-18 07:15:04,210 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 07:15:04,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,41293,1689664501013 in 205 msec 2023-07-18 07:15:04,215 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-18 07:15:04,215 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, ASSIGN in 258 msec 2023-07-18 07:15:04,219 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:04,219 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664504219"}]},"ts":"1689664504219"} 2023-07-18 07:15:04,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 07:15:04,220 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, ASSIGN in 236 msec 2023-07-18 07:15:04,221 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:04,222 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664504222"}]},"ts":"1689664504222"} 2023-07-18 07:15:04,222 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 07:15:04,224 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 07:15:04,226 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:04,228 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:04,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 481 msec 2023-07-18 07:15:04,232 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 345 msec 2023-07-18 07:15:04,264 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 07:15:04,266 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:04,266 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:04,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:04,296 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48992, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:04,300 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 07:15:04,300 DEBUG 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 07:15:04,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 07:15:04,336 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:04,343 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 41 msec 2023-07-18 07:15:04,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 07:15:04,363 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:04,372 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 21 msec 2023-07-18 07:15:04,393 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 07:15:04,396 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 07:15:04,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.080sec 2023-07-18 07:15:04,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 07:15:04,400 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 07:15:04,401 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 07:15:04,402 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33141,1689664498534-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 07:15:04,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33141,1689664498534-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
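
The 'default' and 'hbase' namespaces above are created automatically by the master through CreateNamespaceProcedure. User namespaces go through the same procedure when requested via the Admin API. A minimal sketch, assuming an open `Admin` handle named `admin`; the namespace name is hypothetical:

```java
import org.apache.hadoop.hbase.NamespaceDescriptor;

// Creating a user namespace drives the same CreateNamespaceProcedure seen in the log.
admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
// Namespaces can also carry key/value configuration, e.g.:
// NamespaceDescriptor.create("demo_ns")
//     .addConfiguration("hbase.namespace.quota.maxregions", "10").build()
```
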
2023-07-18 07:15:04,406 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:04,407 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:04,411 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:04,422 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 07:15:04,423 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 07:15:04,480 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(139): Connect 0x481b1111 to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:04,509 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ec14165, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:04,528 DEBUG [hconnection-0xd65c55c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:04,541 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58582, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:04,556 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:04,558 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:04,571 DEBUG [Listener at localhost/33473] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 07:15:04,576 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52448, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 07:15:04,596 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 07:15:04,596 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:04,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 07:15:04,604 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(139): Connect 0x51c4e0c8 to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:04,610 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6555e2ba, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:04,610 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57245 2023-07-18 07:15:04,616 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:04,617 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1017748aa60000a connected 2023-07-18 07:15:04,650 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=425, OpenFileDescriptor=675, MaxFileDescriptor=60000, SystemLoadAverage=517, ProcessCount=176, AvailableMemoryMB=3055 2023-07-18 07:15:04,653 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-18 07:15:04,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:04,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:04,772 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 07:15:04,791 INFO [Listener at localhost/33473] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:04,792 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:04,792 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:04,792 INFO [Listener at localhost/33473] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:04,792 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:04,792 INFO [Listener at localhost/33473] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:04,793 INFO [Listener at localhost/33473] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:04,795 INFO [Listener at localhost/33473] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42375 2023-07-18 07:15:04,796 INFO [Listener at localhost/33473] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:04,798 DEBUG [Listener at localhost/33473] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:04,800 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:04,806 INFO [Listener at localhost/33473] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:04,809 INFO [Listener at localhost/33473] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42375 connecting to ZooKeeper ensemble=127.0.0.1:57245 2023-07-18 07:15:04,816 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:423750x0, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:04,819 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42375-0x1017748aa60000b connected 2023-07-18 07:15:04,819 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:04,821 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 07:15:04,822 DEBUG [Listener at localhost/33473] zookeeper.ZKUtil(164): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:04,825 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42375 2023-07-18 07:15:04,826 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42375 2023-07-18 07:15:04,827 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42375 2023-07-18 07:15:04,828 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42375 2023-07-18 07:15:04,828 DEBUG [Listener at localhost/33473] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42375 2023-07-18 07:15:04,831 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:04,831 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:04,831 INFO [Listener at localhost/33473] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:04,832 INFO [Listener at localhost/33473] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:04,832 INFO [Listener at localhost/33473] http.HttpServer(886): Added 
filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:04,832 INFO [Listener at localhost/33473] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:04,833 INFO [Listener at localhost/33473] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:04,833 INFO [Listener at localhost/33473] http.HttpServer(1146): Jetty bound to port 45257 2023-07-18 07:15:04,833 INFO [Listener at localhost/33473] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:04,837 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:04,837 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@264de25c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:04,838 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:04,838 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ad20967{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:04,848 INFO [Listener at localhost/33473] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:04,849 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:04,849 INFO [Listener at localhost/33473] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:04,850 INFO [Listener at localhost/33473] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:04,851 INFO [Listener at localhost/33473] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:04,853 INFO [Listener at localhost/33473] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@22f99e4c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:04,854 INFO [Listener at localhost/33473] server.AbstractConnector(333): Started ServerConnector@251bffae{HTTP/1.1, (http/1.1)}{0.0.0.0:45257} 2023-07-18 07:15:04,855 INFO [Listener at localhost/33473] server.Server(415): Started @12423ms 2023-07-18 07:15:04,862 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(951): ClusterId : 456d955b-b6e4-4117-84e7-1f3d706ecbe3 2023-07-18 07:15:04,862 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:04,866 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:04,866 DEBUG 
[RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:04,868 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:04,870 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ReadOnlyZKClient(139): Connect 0x39cbd810 to 127.0.0.1:57245 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:04,896 DEBUG [RS:3;jenkins-hbase4:42375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@370c3159, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:04,896 DEBUG [RS:3;jenkins-hbase4:42375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3333b360, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:04,909 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42375 2023-07-18 07:15:04,909 INFO [RS:3;jenkins-hbase4:42375] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:04,910 INFO [RS:3;jenkins-hbase4:42375] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:04,910 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:04,911 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33141,1689664498534 with isa=jenkins-hbase4.apache.org/172.31.14.131:42375, startcode=1689664504791 2023-07-18 07:15:04,912 DEBUG [RS:3;jenkins-hbase4:42375] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:04,917 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35297, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:04,918 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33141] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,918 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
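
When the new region server reports for duty, the RSGroupInfoManager's listener adds it to the 'default' group and rewrites the group znodes ("Updating default servers"). From a client, group membership can be inspected with the rsgroup admin client in the hbase-rsgroup module. A minimal sketch, assuming an open `Connection` named `conn` and that the branch-2.4 `RSGroupAdminClient` API is on the classpath:

```java
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
// The 'default' group holds every live region server not assigned to a named group.
RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
for (Address server : defaultGroup.getServers()) {
  System.out.println("default group member: " + server);
}
```
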
2023-07-18 07:15:04,918 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9 2023-07-18 07:15:04,918 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42711 2023-07-18 07:15:04,919 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=41961 2023-07-18 07:15:04,925 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:04,925 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:04,925 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:04,925 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:04,926 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:04,926 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42375,1689664504791] 2023-07-18 07:15:04,927 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:04,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:04,927 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:04,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:04,932 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,933 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,933 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, 
quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:04,933 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:04,933 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33141,1689664498534] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 07:15:04,934 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:04,933 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,934 WARN [RS:3;jenkins-hbase4:42375] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:04,935 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:04,935 INFO [RS:3;jenkins-hbase4:42375] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:04,935 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:04,935 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,936 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:04,941 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:04,942 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:04,942 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:04,943 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ZKUtil(162): regionserver:42375-0x1017748aa60000b, 
quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:04,944 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:04,944 INFO [RS:3;jenkins-hbase4:42375] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:04,946 INFO [RS:3;jenkins-hbase4:42375] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:04,946 INFO [RS:3;jenkins-hbase4:42375] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:04,946 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:04,956 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:04,959 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:04,959 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,959 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,959 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,959 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,959 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,959 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:04,960 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,960 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,960 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,960 DEBUG [RS:3;jenkins-hbase4:42375] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:04,964 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:04,964 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:04,964 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:04,985 INFO [RS:3;jenkins-hbase4:42375] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:04,985 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42375,1689664504791-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:05,001 INFO [RS:3;jenkins-hbase4:42375] regionserver.Replication(203): jenkins-hbase4.apache.org,42375,1689664504791 started 2023-07-18 07:15:05,001 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42375,1689664504791, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42375, sessionid=0x1017748aa60000b 2023-07-18 07:15:05,001 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:05,001 DEBUG [RS:3;jenkins-hbase4:42375] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:05,001 DEBUG [RS:3;jenkins-hbase4:42375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42375,1689664504791' 2023-07-18 07:15:05,001 DEBUG [RS:3;jenkins-hbase4:42375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:05,002 DEBUG [RS:3;jenkins-hbase4:42375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:05,002 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:05,002 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:05,002 DEBUG [RS:3;jenkins-hbase4:42375] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:05,002 DEBUG [RS:3;jenkins-hbase4:42375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42375,1689664504791' 2023-07-18 07:15:05,002 DEBUG [RS:3;jenkins-hbase4:42375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:05,003 DEBUG [RS:3;jenkins-hbase4:42375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:05,003 DEBUG [RS:3;jenkins-hbase4:42375] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:05,003 INFO [RS:3;jenkins-hbase4:42375] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:05,003 INFO [RS:3;jenkins-hbase4:42375] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
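The records above complete the startup of the fourth region server (RS:3 on port 42375): WAL provider chosen, chores and executor pools started, flush-table-proc and online-snapshot procedure members registered, and quota support reported disabled. As a rough illustration only (the class name is made up, and the ZooKeeper quorum and client port are copied from the log rather than from any test code), a client could confirm that all four servers are live like this:

import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListLiveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // ZooKeeper endpoint as printed in the log; adjust for a real deployment.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "57245");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
      // Keys are ServerName values of the form host,port,startcode,
      // e.g. jenkins-hbase4.apache.org,42375,1689664504791.
      for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
        System.out.println("live region server: " + sn);
      }
    }
  }
}

The earlier ServerEventsListenerThread record ("Updated with servers: 4") is the rsgroup-side view of the same membership change.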
2023-07-18 07:15:05,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:05,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:05,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:05,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:05,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:05,024 DEBUG [hconnection-0x320932da-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:05,029 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58596, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:05,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:05,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:05,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:05,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:05,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52448 deadline: 1689665705052, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
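This block of RPCs is the test harness first adding an rsgroup named "master" and then asking to move the master's own RPC address, jenkins-hbase4.apache.org:33141, into it; RSGroupAdminServer rejects the move with a ConstraintException because that address is not among the online region servers. A sketch of the same two calls through the hbase-rsgroup client API (class and method names as they appear in the stack trace; the standalone wrapper class and the connection bootstrap are assumptions, not test code):

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServerIntoGroup {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");
      try {
        // The master's RPC endpoint is not a live region server, so the server side
        // is expected to refuse this exactly as logged above.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33141)),
            "master");
      } catch (ConstraintException e) {
        System.out.println("rejected as expected: " + e.getMessage());
      }
    }
  }
}

The test treats this failure as benign, which is why the next record is only a WARN ("Got this on setup, FYI") rather than a test failure.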
2023-07-18 07:15:05,055 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:05,060 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:05,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:05,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:05,063 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:05,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:05,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:05,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:05,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:05,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:05,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:05,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:05,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:05,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:05,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:05,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:05,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:05,105 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:05,107 INFO [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42375%2C1689664504791, suffix=, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,42375,1689664504791, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs, maxLogs=32 2023-07-18 07:15:05,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:05,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:05,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:05,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:05,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(238): Moving server region 41aebbe53986314d2b2440254cc81255, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:05,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, REOPEN/MOVE 2023-07-18 07:15:05,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 07:15:05,121 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, REOPEN/MOVE 2023-07-18 07:15:05,128 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=41aebbe53986314d2b2440254cc81255, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:05,129 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664505128"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664505128"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664505128"}]},"ts":"1689664505128"} 2023-07-18 07:15:05,138 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 41aebbe53986314d2b2440254cc81255, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:05,139 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:05,148 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:05,149 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:05,161 INFO [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,42375,1689664504791/jenkins-hbase4.apache.org%2C42375%2C1689664504791.1689664505108 2023-07-18 07:15:05,162 DEBUG [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK], DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK], DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK]] 2023-07-18 07:15:05,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 41aebbe53986314d2b2440254cc81255, disabling compactions & flushes 2023-07-18 07:15:05,317 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:05,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:05,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. after waiting 0 ms 2023-07-18 07:15:05,317 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 
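Earlier, RS:3 instantiated a WALProvider of type AsyncFSWALProvider, and the records just above show it wiring the new AsyncFSWAL writer to a three-datanode pipeline. The provider is chosen by configuration; the following standalone snippet (hypothetical class name, not something this test sets explicitly) shows the relevant key:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects org.apache.hadoop.hbase.wal.AsyncFSWALProvider, the default
    // in recent 2.x releases; "filesystem" would select the classic FSHLog provider.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("WAL provider: " + conf.get("hbase.wal.provider"));
  }
}

The blocksize=256 MB and rollsize=128 MB figures printed a few records earlier typically derive from twice the HDFS block size and from hbase.regionserver.logroll.multiplier (0.5 by default), not from the provider setting itself.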
2023-07-18 07:15:05,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 41aebbe53986314d2b2440254cc81255 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-18 07:15:05,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/.tmp/info/a95dd98fe16d46759c73f6c558a32ece 2023-07-18 07:15:05,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/.tmp/info/a95dd98fe16d46759c73f6c558a32ece as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info/a95dd98fe16d46759c73f6c558a32ece 2023-07-18 07:15:05,514 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info/a95dd98fe16d46759c73f6c558a32ece, entries=2, sequenceid=6, filesize=4.8 K 2023-07-18 07:15:05,518 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 41aebbe53986314d2b2440254cc81255 in 199ms, sequenceid=6, compaction requested=false 2023-07-18 07:15:05,520 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 07:15:05,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-18 07:15:05,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 
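The close of the namespace region above included an implicit flush: 78 bytes of memstore were written to a temporary HFile, committed under info/, and a recovered.edits/9.seqid marker was recorded so the region can reopen at the correct sequence id. Flushes can also be requested explicitly; a minimal sketch, assuming an already-open Admin handle (not shown) and using the table name from the log:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class FlushExample {
  // Asks the servers hosting this table's regions to write their memstores out
  // to HFiles, the same kind of flush the close path performed internally above.
  static void flushNamespaceTable(Admin admin) throws Exception {
    admin.flush(TableName.valueOf("hbase:namespace"));
  }
}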
2023-07-18 07:15:05,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 41aebbe53986314d2b2440254cc81255: 2023-07-18 07:15:05,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 41aebbe53986314d2b2440254cc81255 move to jenkins-hbase4.apache.org,42375,1689664504791 record at close sequenceid=6 2023-07-18 07:15:05,542 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,544 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=41aebbe53986314d2b2440254cc81255, regionState=CLOSED 2023-07-18 07:15:05,545 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664505544"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664505544"}]},"ts":"1689664505544"} 2023-07-18 07:15:05,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 07:15:05,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 41aebbe53986314d2b2440254cc81255, server=jenkins-hbase4.apache.org,33769,1689664501155 in 414 msec 2023-07-18 07:15:05,559 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:05,710 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
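Taken together, these records are the server side of relocating region 41aebbe53986314d2b2440254cc81255: a CloseRegionProcedure on jenkins-hbase4.apache.org,33769,1689664501155, a CLOSED update in hbase:meta, and then the balancer choosing the destination before the parent REOPEN/MOVE procedure continues. The same kind of transition can be requested for a single region from a client; a sketch with values copied from the log and an assumed, already-open Admin handle:

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionExample {
  // Moves one region, identified by its encoded name, to an explicit destination;
  // the master drives it with the same TransitRegionStateProcedure seen above.
  static void moveNamespaceRegion(Admin admin) throws Exception {
    byte[] encodedRegionName = Bytes.toBytes("41aebbe53986314d2b2440254cc81255");
    ServerName destination =
        ServerName.valueOf("jenkins-hbase4.apache.org", 42375, 1689664504791L);
    admin.move(encodedRegionName, destination);
  }
}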
2023-07-18 07:15:05,710 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=41aebbe53986314d2b2440254cc81255, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:05,710 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664505710"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664505710"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664505710"}]},"ts":"1689664505710"} 2023-07-18 07:15:05,713 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 41aebbe53986314d2b2440254cc81255, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:05,868 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:05,868 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:05,874 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47624, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:05,887 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:05,887 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 41aebbe53986314d2b2440254cc81255, NAME => 'hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:05,888 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,888 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:05,888 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,888 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,892 INFO [StoreOpener-41aebbe53986314d2b2440254cc81255-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,893 DEBUG [StoreOpener-41aebbe53986314d2b2440254cc81255-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info 2023-07-18 07:15:05,894 DEBUG [StoreOpener-41aebbe53986314d2b2440254cc81255-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info 2023-07-18 07:15:05,894 INFO [StoreOpener-41aebbe53986314d2b2440254cc81255-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 41aebbe53986314d2b2440254cc81255 columnFamilyName info 2023-07-18 07:15:05,919 DEBUG [StoreOpener-41aebbe53986314d2b2440254cc81255-1] regionserver.HStore(539): loaded hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/info/a95dd98fe16d46759c73f6c558a32ece 2023-07-18 07:15:05,920 INFO [StoreOpener-41aebbe53986314d2b2440254cc81255-1] regionserver.HStore(310): Store=41aebbe53986314d2b2440254cc81255/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:05,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,925 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,930 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:05,931 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 41aebbe53986314d2b2440254cc81255; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10563976480, jitterRate=-0.01615302264690399}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:05,931 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 41aebbe53986314d2b2440254cc81255: 2023-07-18 07:15:05,933 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255., pid=14, masterSystemTime=1689664505868 2023-07-18 07:15:05,938 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:05,938 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 
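The open on jenkins-hbase4.apache.org,42375,1689664504791 is now complete, so the region's authoritative location in hbase:meta has changed. Clients holding a cached location need to re-resolve it; a small sketch, assuming an already-open Connection, that forces a fresh lookup:

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class LocateRegionExample {
  static void printNamespaceRegionLocation(Connection conn) throws Exception {
    try (RegionLocator locator =
        conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      // reload=true bypasses the client-side location cache and re-reads hbase:meta,
      // so the result reflects the REOPEN/MOVE that just finished.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
      System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
    }
  }
}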
2023-07-18 07:15:05,940 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=41aebbe53986314d2b2440254cc81255, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:05,940 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664505939"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664505939"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664505939"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664505939"}]},"ts":"1689664505939"} 2023-07-18 07:15:05,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-18 07:15:05,946 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 41aebbe53986314d2b2440254cc81255, server=jenkins-hbase4.apache.org,42375,1689664504791 in 230 msec 2023-07-18 07:15:05,948 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=41aebbe53986314d2b2440254cc81255, REOPEN/MOVE in 830 msec 2023-07-18 07:15:06,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-18 07:15:06,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to default 2023-07-18 07:15:06,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:06,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:06,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:06,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:06,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:06,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:06,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:06,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:06,142 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:06,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-18 07:15:06,147 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:06,147 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:06,148 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:06,148 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:06,154 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:06,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:06,160 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:06,161 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:06,161 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:06,161 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:06,162 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 empty. 2023-07-18 07:15:06,162 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:06,163 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b empty. 
2023-07-18 07:15:06,163 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 empty. 2023-07-18 07:15:06,163 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc empty. 2023-07-18 07:15:06,163 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:06,163 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 empty. 2023-07-18 07:15:06,163 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:06,164 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:06,164 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:06,164 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:06,164 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 07:15:06,189 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:06,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4c6f754bd010218aa0c0fabc1a8cc990, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:06,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 22bc5634eea3de71f9212e72bf460c81, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:06,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ab78d55de4d49c9d3b27c485f00ed06b, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:06,257 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:06,260 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 22bc5634eea3de71f9212e72bf460c81, disabling compactions & flushes 2023-07-18 07:15:06,260 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:06,260 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:06,260 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. after waiting 0 ms 2023-07-18 07:15:06,260 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:06,260 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 
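The CreateTableProcedure is materializing five regions whose boundaries ('', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', '') look like an even five-way split of the key range aaaaa..zzzzz. A sketch of creating a similarly pre-split table with a single 'f' family; the per-family options spelled out in the log are left at their defaults here rather than repeated, and an already-open Admin handle is assumed:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTable {
  static void create(Admin admin) throws Exception {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
        .build();
    // Five regions spanning "aaaaa" to "zzzzz"; the client computes evenly spaced
    // split points, consistent with the boundary bytes in the region names above.
    admin.createTable(desc, Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
  }
}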
2023-07-18 07:15:06,260 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 22bc5634eea3de71f9212e72bf460c81: 2023-07-18 07:15:06,261 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8b608539cf8446d01fa500dcdca355fc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:06,266 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:06,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:06,269 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 4c6f754bd010218aa0c0fabc1a8cc990, disabling compactions & flushes 2023-07-18 07:15:06,272 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:06,272 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:06,272 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. after waiting 0 ms 2023-07-18 07:15:06,272 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:06,272 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 
2023-07-18 07:15:06,272 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 4c6f754bd010218aa0c0fabc1a8cc990: 2023-07-18 07:15:06,273 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 620b80ec7fc8a949d96f67128c493903, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:06,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:06,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ab78d55de4d49c9d3b27c485f00ed06b, disabling compactions & flushes 2023-07-18 07:15:06,299 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:06,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:06,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. after waiting 0 ms 2023-07-18 07:15:06,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:06,299 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 
2023-07-18 07:15:06,299 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ab78d55de4d49c9d3b27c485f00ed06b: 2023-07-18 07:15:06,336 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:06,337 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8b608539cf8446d01fa500dcdca355fc, disabling compactions & flushes 2023-07-18 07:15:06,337 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:06,337 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:06,337 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. after waiting 0 ms 2023-07-18 07:15:06,337 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:06,337 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:06,337 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8b608539cf8446d01fa500dcdca355fc: 2023-07-18 07:15:06,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:06,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:06,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 620b80ec7fc8a949d96f67128c493903, disabling compactions & flushes 2023-07-18 07:15:06,742 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:06,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:06,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 
after waiting 0 ms 2023-07-18 07:15:06,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:06,742 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:06,742 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 620b80ec7fc8a949d96f67128c493903: 2023-07-18 07:15:06,749 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:06,750 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664506750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664506750"}]},"ts":"1689664506750"} 2023-07-18 07:15:06,750 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664506750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664506750"}]},"ts":"1689664506750"} 2023-07-18 07:15:06,750 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664506750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664506750"}]},"ts":"1689664506750"} 2023-07-18 07:15:06,750 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664506750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664506750"}]},"ts":"1689664506750"} 2023-07-18 07:15:06,751 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664506750"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664506750"}]},"ts":"1689664506750"} 2023-07-18 07:15:06,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:06,798 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
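Editor's note: the CREATE_TABLE_ADD_TO_META step above writes one info:regioninfo/info:state row per region into hbase:meta ("Added 5 regions to meta."). Once the procedure completes, the same layout is visible from any client. A short hedged sketch, assuming the 2.x Admin API; the class name is illustrative.

    // Sketch: list the regions just registered in hbase:meta and print their boundaries.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ListTableRegionsSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // One RegionInfo per meta row written by the procedure above.
          for (RegionInfo ri : admin.getRegions(table)) {
            System.out.println(ri.getEncodedName() + " ["
                + Bytes.toStringBinary(ri.getStartKey()) + ", "
                + Bytes.toStringBinary(ri.getEndKey()) + ")");
          }
        }
      }
    }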
2023-07-18 07:15:06,800 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:06,800 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664506800"}]},"ts":"1689664506800"} 2023-07-18 07:15:06,802 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 07:15:06,812 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:06,813 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:06,813 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:06,813 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:06,813 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, ASSIGN}] 2023-07-18 07:15:06,819 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, ASSIGN 2023-07-18 07:15:06,819 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, ASSIGN 2023-07-18 07:15:06,820 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, ASSIGN 2023-07-18 07:15:06,820 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, ASSIGN 2023-07-18 07:15:06,821 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, ASSIGN 2023-07-18 07:15:06,821 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:06,821 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:06,821 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:06,821 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:06,822 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:06,971 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
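Editor's note: the balancer lines above show the master computing an initial assignment plan for the five new regions across the available region servers ("Reassigned 5 regions."). The real logic lives in BaseLoadBalancer and is considerably more involved; the following is only a toy illustration of the round-robin idea, not the HBase implementation, using the encoded region names from this log and hypothetical server labels.

    // Toy round-robin placement sketch; NOT BaseLoadBalancer's actual algorithm.
    import java.util.*;

    public class RoundRobinSketch {
      // Hand out regions to servers one at a time, wrapping around the server list.
      static Map<String, List<String>> roundRobin(List<String> regions, List<String> servers) {
        Map<String, List<String>> plan = new LinkedHashMap<>();
        servers.forEach(s -> plan.put(s, new ArrayList<>()));
        for (int i = 0; i < regions.size(); i++) {
          plan.get(servers.get(i % servers.size())).add(regions.get(i));
        }
        return plan;
      }

      public static void main(String[] args) {
        System.out.println(roundRobin(
            Arrays.asList("4c6f754bd010218aa0c0fabc1a8cc990", "22bc5634eea3de71f9212e72bf460c81",
                "ab78d55de4d49c9d3b27c485f00ed06b", "8b608539cf8446d01fa500dcdca355fc",
                "620b80ec7fc8a949d96f67128c493903"),
            Arrays.asList("rs1", "rs2")));
      }
    }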
2023-07-18 07:15:06,975 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:06,975 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:06,975 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:06,975 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:06,976 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664506975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664506975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664506975"}]},"ts":"1689664506975"} 2023-07-18 07:15:06,976 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664506975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664506975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664506975"}]},"ts":"1689664506975"} 2023-07-18 07:15:06,975 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:06,977 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664506975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664506975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664506975"}]},"ts":"1689664506975"} 2023-07-18 07:15:06,976 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664506975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664506975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664506975"}]},"ts":"1689664506975"} 2023-07-18 07:15:06,975 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664506975"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664506975"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664506975"}]},"ts":"1689664506975"} 2023-07-18 07:15:06,980 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=17, state=RUNNABLE; OpenRegionProcedure 
22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:06,981 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=19, state=RUNNABLE; OpenRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:06,982 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=20, state=RUNNABLE; OpenRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:06,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=16, state=RUNNABLE; OpenRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:06,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=18, state=RUNNABLE; OpenRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:07,140 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,141 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:07,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 22bc5634eea3de71f9212e72bf460c81, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 07:15:07,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c6f754bd010218aa0c0fabc1a8cc990, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 07:15:07,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; 
hotProtect now enable 2023-07-18 07:15:07,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,142 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,147 INFO [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,147 INFO [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,149 DEBUG [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/f 2023-07-18 07:15:07,149 DEBUG [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/f 2023-07-18 07:15:07,149 DEBUG [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/f 2023-07-18 07:15:07,149 DEBUG [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/f 2023-07-18 07:15:07,149 INFO [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 22bc5634eea3de71f9212e72bf460c81 columnFamilyName f 2023-07-18 07:15:07,150 INFO [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction 
policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c6f754bd010218aa0c0fabc1a8cc990 columnFamilyName f 2023-07-18 07:15:07,150 INFO [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] regionserver.HStore(310): Store=22bc5634eea3de71f9212e72bf460c81/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,152 INFO [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] regionserver.HStore(310): Store=4c6f754bd010218aa0c0fabc1a8cc990/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,153 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:07,172 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 22bc5634eea3de71f9212e72bf460c81; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11102374080, jitterRate=0.033989161252975464}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 22bc5634eea3de71f9212e72bf460c81: 2023-07-18 07:15:07,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:07,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81., pid=21, masterSystemTime=1689664507134 2023-07-18 07:15:07,175 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c6f754bd010218aa0c0fabc1a8cc990; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9503283040, jitterRate=-0.11493779718875885}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c6f754bd010218aa0c0fabc1a8cc990: 2023-07-18 07:15:07,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990., pid=24, masterSystemTime=1689664507135 2023-07-18 07:15:07,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,177 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 
2023-07-18 07:15:07,177 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 620b80ec7fc8a949d96f67128c493903, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 07:15:07,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,178 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:07,179 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507178"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507178"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507178"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507178"}]},"ts":"1689664507178"} 2023-07-18 07:15:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:07,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:07,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 
2023-07-18 07:15:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8b608539cf8446d01fa500dcdca355fc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 07:15:07,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,181 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:07,191 INFO [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,191 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507181"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507181"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507181"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507181"}]},"ts":"1689664507181"} 2023-07-18 07:15:07,192 INFO [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,194 DEBUG [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/f 2023-07-18 07:15:07,195 DEBUG [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/f 2023-07-18 07:15:07,196 DEBUG [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/f 2023-07-18 07:15:07,196 DEBUG [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/f 2023-07-18 07:15:07,197 INFO [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 620b80ec7fc8a949d96f67128c493903 columnFamilyName f 2023-07-18 07:15:07,198 INFO [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] regionserver.HStore(310): Store=620b80ec7fc8a949d96f67128c493903/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,200 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,201 INFO [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8b608539cf8446d01fa500dcdca355fc columnFamilyName f 2023-07-18 07:15:07,201 INFO [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] regionserver.HStore(310): Store=8b608539cf8446d01fa500dcdca355fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,206 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=17 2023-07-18 07:15:07,209 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, ASSIGN in 393 msec 2023-07-18 07:15:07,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,210 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=17, state=SUCCESS; OpenRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,42375,1689664504791 in 203 msec 2023-07-18 07:15:07,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,211 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=16 2023-07-18 07:15:07,211 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=16, state=SUCCESS; OpenRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,41293,1689664501013 in 221 msec 2023-07-18 07:15:07,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:07,215 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, ASSIGN in 398 msec 2023-07-18 07:15:07,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 620b80ec7fc8a949d96f67128c493903; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9416532160, jitterRate=-0.12301710247993469}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 620b80ec7fc8a949d96f67128c493903: 2023-07-18 07:15:07,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903., pid=23, masterSystemTime=1689664507134 2023-07-18 07:15:07,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:07,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 
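Editor's note: the recurring "Checking to see if procedure is done pid=15" records are the master-side view of the client polling its create-table future; the blocking createTable call sketched earlier performs that wait internally, and the asynchronous variant exposes it explicitly. A hedged fragment, assuming the 2.x Admin API; the method wrapper and timeout are illustrative, and desc/splits refer to the earlier sketch.

    // Sketch: asynchronous create plus an explicit wait on the returned future.
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class CreateTableAsyncSketch {
      // desc and splits as built in CreatePreSplitTableSketch above.
      static void createAndWait(Admin admin, TableDescriptor desc, byte[][] splits) throws Exception {
        Future<Void> pending = admin.createTableAsync(desc, splits);
        // The client blocks here while the master runs the CreateTableProcedure
        // and keeps answering the "is procedure done?" polls seen in the log.
        pending.get(5, TimeUnit.MINUTES);
      }
    }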
2023-07-18 07:15:07,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab78d55de4d49c9d3b27c485f00ed06b, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 07:15:07,220 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:07,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:07,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,220 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507220"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507220"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507220"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507220"}]},"ts":"1689664507220"} 2023-07-18 07:15:07,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8b608539cf8446d01fa500dcdca355fc; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9449110880, jitterRate=-0.11998297274112701}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8b608539cf8446d01fa500dcdca355fc: 2023-07-18 07:15:07,223 INFO [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,223 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc., pid=22, masterSystemTime=1689664507135 2023-07-18 07:15:07,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,226 DEBUG [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/f 2023-07-18 07:15:07,226 DEBUG [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/f 2023-07-18 07:15:07,226 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:07,226 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507226"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507226"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507226"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507226"}]},"ts":"1689664507226"} 2023-07-18 07:15:07,227 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=20 2023-07-18 07:15:07,227 INFO [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab78d55de4d49c9d3b27c485f00ed06b columnFamilyName f 2023-07-18 07:15:07,228 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=20, state=SUCCESS; OpenRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,42375,1689664504791 in 242 msec 2023-07-18 07:15:07,230 INFO [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] regionserver.HStore(310): Store=ab78d55de4d49c9d3b27c485f00ed06b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
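Editor's note: as each OpenRegionProcedure completes above, the master rewrites the region's hbase:meta row with state OPEN and its regionLocation. The same placement can be read back from a client through the region locator. A hedged sketch, assuming the 2.x client API; the class name is illustrative.

    // Sketch: print where each region of the table is currently deployed.
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationsSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(table)) {
          // One HRegionLocation per region; the server name mirrors the
          // regionLocation=... fields written to hbase:meta in the log above.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }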
2023-07-18 07:15:07,231 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, ASSIGN in 414 msec 2023-07-18 07:15:07,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,237 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=19 2023-07-18 07:15:07,237 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=19, state=SUCCESS; OpenRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,41293,1689664501013 in 250 msec 2023-07-18 07:15:07,240 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, ASSIGN in 424 msec 2023-07-18 07:15:07,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:07,241 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab78d55de4d49c9d3b27c485f00ed06b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10584369920, jitterRate=-0.014253735542297363}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab78d55de4d49c9d3b27c485f00ed06b: 2023-07-18 07:15:07,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b., pid=25, masterSystemTime=1689664507134 2023-07-18 07:15:07,246 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:07,246 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507246"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507246"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507246"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507246"}]},"ts":"1689664507246"} 2023-07-18 07:15:07,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=18 2023-07-18 07:15:07,253 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=18, state=SUCCESS; OpenRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,42375,1689664504791 in 262 msec 2023-07-18 07:15:07,257 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=15 2023-07-18 07:15:07,257 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, ASSIGN in 440 msec 2023-07-18 07:15:07,260 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:07,260 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664507260"}]},"ts":"1689664507260"} 2023-07-18 07:15:07,262 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 07:15:07,265 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:07,270 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.1270 sec 2023-07-18 07:15:07,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:07,278 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-18 07:15:07,279 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-18 07:15:07,280 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:07,286 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-18 07:15:07,287 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:07,287 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-18 07:15:07,288 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:07,293 DEBUG [Listener at localhost/33473] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:07,297 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49002, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:07,300 DEBUG [Listener at localhost/33473] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:07,304 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43780, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:07,305 DEBUG [Listener at localhost/33473] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:07,312 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58604, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:07,314 DEBUG [Listener at localhost/33473] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:07,316 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47626, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:07,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:07,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:07,334 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:07,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,350 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:07,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:07,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 4c6f754bd010218aa0c0fabc1a8cc990 to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:07,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:07,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:07,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:07,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:07,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, REOPEN/MOVE 2023-07-18 07:15:07,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 22bc5634eea3de71f9212e72bf460c81 to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,361 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, REOPEN/MOVE 2023-07-18 07:15:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:07,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:07,362 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:07,363 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507362"}]},"ts":"1689664507362"} 2023-07-18 07:15:07,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, REOPEN/MOVE 2023-07-18 07:15:07,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region ab78d55de4d49c9d3b27c485f00ed06b to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,364 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, REOPEN/MOVE 2023-07-18 07:15:07,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:07,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:07,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:07,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:07,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:07,376 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:07,377 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507376"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507376"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507376"}]},"ts":"1689664507376"} 2023-07-18 07:15:07,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:07,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, REOPEN/MOVE 2023-07-18 07:15:07,377 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 8b608539cf8446d01fa500dcdca355fc to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,379 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock 
for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, REOPEN/MOVE 2023-07-18 07:15:07,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:07,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:07,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=27, state=RUNNABLE; CloseRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:07,382 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:07,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:07,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:07,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:07,382 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507382"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507382"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507382"}]},"ts":"1689664507382"} 2023-07-18 07:15:07,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, REOPEN/MOVE 2023-07-18 07:15:07,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 620b80ec7fc8a949d96f67128c493903 to RSGroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:07,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:07,387 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, REOPEN/MOVE 2023-07-18 07:15:07,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:07,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:07,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:07,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(378): Number of 
tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:07,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=28, state=RUNNABLE; CloseRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:07,396 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:07,396 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507396"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507396"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507396"}]},"ts":"1689664507396"} 2023-07-18 07:15:07,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, REOPEN/MOVE 2023-07-18 07:15:07,403 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, REOPEN/MOVE 2023-07-18 07:15:07,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_847781082, current retry=0 2023-07-18 07:15:07,405 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=31, state=RUNNABLE; CloseRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:07,406 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:07,406 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507406"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507406"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507406"}]},"ts":"1689664507406"} 2023-07-18 07:15:07,409 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=32, state=RUNNABLE; CloseRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:07,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8b608539cf8446d01fa500dcdca355fc, disabling compactions & flushes 2023-07-18 07:15:07,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 
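The block above shows the master acting on the RSGroupAdminService.MoveTables call: the table's group membership is rewritten in the /hbase/rsgroup znodes, and each of the table's five regions gets a TransitRegionStateProcedure (REOPEN/MOVE) with a CloseRegionProcedure child so the region can be reopened on a server of the target group. As a rough illustration only, the sketch below issues the same kind of move from client code; it assumes the branch-2.4 hbase-rsgroup client (RSGroupAdminClient) with a moveTables(Set<TableName>, String) method, and reuses the table and group names from this log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String targetGroup = "Group_testTableMoveTruncateAndDrop_847781082";
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          // Assumed client API of the hbase-rsgroup module; this triggers the same master-side
          // flow logged above: one REOPEN/MOVE procedure per region of the table.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
        }
      }
    }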
2023-07-18 07:15:07,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. after waiting 0 ms 2023-07-18 07:15:07,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,545 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:07,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,546 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,546 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8b608539cf8446d01fa500dcdca355fc: 2023-07-18 07:15:07,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 22bc5634eea3de71f9212e72bf460c81, disabling compactions & flushes 2023-07-18 07:15:07,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8b608539cf8446d01fa500dcdca355fc move to jenkins-hbase4.apache.org,33769,1689664501155 record at close sequenceid=2 2023-07-18 07:15:07,547 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. after waiting 0 ms 2023-07-18 07:15:07,547 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 
2023-07-18 07:15:07,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,552 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c6f754bd010218aa0c0fabc1a8cc990, disabling compactions & flushes 2023-07-18 07:15:07,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:07,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:07,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. after waiting 0 ms 2023-07-18 07:15:07,553 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:07,554 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=CLOSED 2023-07-18 07:15:07,555 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507554"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664507554"}]},"ts":"1689664507554"} 2023-07-18 07:15:07,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:07,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 
2023-07-18 07:15:07,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c6f754bd010218aa0c0fabc1a8cc990: 2023-07-18 07:15:07,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4c6f754bd010218aa0c0fabc1a8cc990 move to jenkins-hbase4.apache.org,39465,1689664501221 record at close sequenceid=2 2023-07-18 07:15:07,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:07,563 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=31 2023-07-18 07:15:07,563 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=31, state=SUCCESS; CloseRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,41293,1689664501013 in 152 msec 2023-07-18 07:15:07,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 22bc5634eea3de71f9212e72bf460c81: 2023-07-18 07:15:07,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 22bc5634eea3de71f9212e72bf460c81 move to jenkins-hbase4.apache.org,39465,1689664501221 record at close sequenceid=2 2023-07-18 07:15:07,566 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:07,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,569 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=CLOSED 2023-07-18 07:15:07,569 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507569"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664507569"}]},"ts":"1689664507569"} 2023-07-18 07:15:07,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 620b80ec7fc8a949d96f67128c493903, disabling compactions & flushes 2023-07-18 07:15:07,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 
2023-07-18 07:15:07,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:07,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. after waiting 0 ms 2023-07-18 07:15:07,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:07,572 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=CLOSED 2023-07-18 07:15:07,573 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507572"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664507572"}]},"ts":"1689664507572"} 2023-07-18 07:15:07,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-18 07:15:07,581 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=27 2023-07-18 07:15:07,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,41293,1689664501013 in 196 msec 2023-07-18 07:15:07,581 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=27, state=SUCCESS; CloseRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,42375,1689664504791 in 195 msec 2023-07-18 07:15:07,582 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:07,582 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:07,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:07,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 
2023-07-18 07:15:07,585 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 620b80ec7fc8a949d96f67128c493903: 2023-07-18 07:15:07,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 620b80ec7fc8a949d96f67128c493903 move to jenkins-hbase4.apache.org,33769,1689664501155 record at close sequenceid=2 2023-07-18 07:15:07,588 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=CLOSED 2023-07-18 07:15:07,588 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507588"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664507588"}]},"ts":"1689664507588"} 2023-07-18 07:15:07,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,591 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab78d55de4d49c9d3b27c485f00ed06b, disabling compactions & flushes 2023-07-18 07:15:07,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=32 2023-07-18 07:15:07,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=32, state=SUCCESS; CloseRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,42375,1689664504791 in 181 msec 2023-07-18 07:15:07,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. after waiting 0 ms 2023-07-18 07:15:07,606 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 
2023-07-18 07:15:07,606 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:07,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:07,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab78d55de4d49c9d3b27c485f00ed06b: 2023-07-18 07:15:07,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ab78d55de4d49c9d3b27c485f00ed06b move to jenkins-hbase4.apache.org,39465,1689664501221 record at close sequenceid=2 2023-07-18 07:15:07,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,621 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=CLOSED 2023-07-18 07:15:07,621 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507621"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664507621"}]},"ts":"1689664507621"} 2023-07-18 07:15:07,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=28 2023-07-18 07:15:07,626 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=28, state=SUCCESS; CloseRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,42375,1689664504791 in 228 msec 2023-07-18 07:15:07,627 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:07,716 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
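At this point all five CloseRegionProcedures have finished, the regions are in CLOSED state, and the balancer has picked destinations for them ("Reassigned 5 regions"). The group membership itself was persisted when the znodes were updated earlier, so a client can already confirm which rsgroup owns the table before the reopens complete. A hedged sketch, assuming the same RSGroupAdminClient as above and that RSGroupInfo exposes getName()/getServers()/getTables():

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class GroupOwnershipCheckSketch {
      // Prints which rsgroup owns the table; conn is an open Connection to this cluster.
      static void printOwningGroup(Connection conn) throws IOException {
        RSGroupInfo info = new RSGroupAdminClient(conn)
            .getRSGroupInfoOfTable(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
        System.out.println(info.getName());    // expected: Group_testTableMoveTruncateAndDrop_847781082
        System.out.println(info.getServers()); // region servers backing that group
        System.out.println(info.getTables());  // tables pinned to the group, including this one
      }
    }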
2023-07-18 07:15:07,717 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:07,717 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:07,717 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:07,717 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:07,717 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507717"}]},"ts":"1689664507717"} 2023-07-18 07:15:07,717 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:07,717 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507716"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507716"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507716"}]},"ts":"1689664507716"} 2023-07-18 07:15:07,717 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507716"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507716"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507716"}]},"ts":"1689664507716"} 2023-07-18 07:15:07,717 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507717"}]},"ts":"1689664507717"} 2023-07-18 07:15:07,717 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507716"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664507716"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664507716"}]},"ts":"1689664507716"} 2023-07-18 07:15:07,721 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=32, state=RUNNABLE; OpenRegionProcedure 
620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:07,722 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:07,723 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=27, state=RUNNABLE; OpenRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:07,726 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=31, state=RUNNABLE; OpenRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:07,731 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=26, state=RUNNABLE; OpenRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:07,878 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:07,878 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:07,881 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:07,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:07,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 620b80ec7fc8a949d96f67128c493903, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 07:15:07,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 
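The RSProcedureDispatcher has now connected to the chosen servers and the OpenRegionProcedures begin opening the regions there. The 60,000 ms Waiter lines near the top of this excerpt suggest how the harness waits for this to settle; a minimal sketch of that kind of wait, assuming TEST_UTIL is the HBaseTestingUtility driving this mini-cluster:

    import org.apache.hadoop.hbase.TableName;

    // Inside the test class, with TEST_UTIL already backing this mini-cluster.
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    // Blocks until hbase:meta reports every region of the table OPEN,
    // matching the "All regions ... assigned" messages earlier in the log.
    TEST_UTIL.waitUntilAllRegionsAssigned(table);
    // Mirrors the 60,000 ms timeout printed by hbase.Waiter above.
    TEST_UTIL.waitFor(60000,
        () -> TEST_UTIL.getAdmin().getRegions(table).size() == 5);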
2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 22bc5634eea3de71f9212e72bf460c81, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,886 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,888 INFO [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,888 INFO [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,890 DEBUG [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/f 2023-07-18 07:15:07,890 DEBUG [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/f 2023-07-18 07:15:07,890 DEBUG [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] util.CommonFSUtils(522): Set 
storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/f 2023-07-18 07:15:07,890 DEBUG [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/f 2023-07-18 07:15:07,891 INFO [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 620b80ec7fc8a949d96f67128c493903 columnFamilyName f 2023-07-18 07:15:07,891 INFO [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 22bc5634eea3de71f9212e72bf460c81 columnFamilyName f 2023-07-18 07:15:07,893 INFO [StoreOpener-620b80ec7fc8a949d96f67128c493903-1] regionserver.HStore(310): Store=620b80ec7fc8a949d96f67128c493903/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,893 INFO [StoreOpener-22bc5634eea3de71f9212e72bf460c81-1] regionserver.HStore(310): Store=22bc5634eea3de71f9212e72bf460c81/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:07,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:07,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 22bc5634eea3de71f9212e72bf460c81; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10577856960, jitterRate=-0.014860302209854126}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 22bc5634eea3de71f9212e72bf460c81: 2023-07-18 07:15:07,907 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 620b80ec7fc8a949d96f67128c493903; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9848356160, jitterRate=-0.08280035853385925}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 620b80ec7fc8a949d96f67128c493903: 2023-07-18 07:15:07,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81., pid=38, masterSystemTime=1689664507878 2023-07-18 07:15:07,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903., pid=36, masterSystemTime=1689664507878 2023-07-18 07:15:07,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:07,920 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:07,920 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 
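The two "Opened ... next sequenceid=5" entries above also print the effective split threshold: desiredMaxFileSize is the configured region max file size with a small per-region random jitter applied, and the logged numbers line up with the stock 10 GiB default. A quick arithmetic check (the 10 GiB figure for hbase.hregion.max.filesize is an assumption, not something this log states):

    // desiredMaxFileSize ≈ configured max * (1 + jitterRate), using values printed above.
    long configuredMax = 10737418240L;            // assumed hbase.hregion.max.filesize default (10 GiB)
    double jitterRate  = -0.014860302209854126;   // logged for region 22bc5634eea3de71f9212e72bf460c81
    long desired = (long) (configuredMax + configuredMax * jitterRate);
    System.out.println(desired);                  // ~10577856957, matching the logged 10577856960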
2023-07-18 07:15:07,920 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:07,920 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8b608539cf8446d01fa500dcdca355fc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 07:15:07,920 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664507920"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507920"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507920"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507920"}]},"ts":"1689664507920"} 2023-07-18 07:15:07,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,921 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,923 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:07,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 
2023-07-18 07:15:07,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ab78d55de4d49c9d3b27c485f00ed06b, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 07:15:07,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,928 INFO [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,928 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:07,928 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507928"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507928"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507928"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507928"}]},"ts":"1689664507928"} 2023-07-18 07:15:07,933 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=32 2023-07-18 07:15:07,933 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=32, state=SUCCESS; OpenRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,33769,1689664501155 in 208 msec 2023-07-18 07:15:07,935 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=27 2023-07-18 07:15:07,936 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, REOPEN/MOVE in 538 msec 2023-07-18 07:15:07,940 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=27, state=SUCCESS; OpenRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,39465,1689664501221 in 207 msec 2023-07-18 07:15:07,940 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=22bc5634eea3de71f9212e72bf460c81, REOPEN/MOVE in 574 msec 2023-07-18 07:15:07,940 INFO [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,941 DEBUG [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/f 2023-07-18 07:15:07,941 DEBUG [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/f 2023-07-18 07:15:07,941 INFO [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ab78d55de4d49c9d3b27c485f00ed06b columnFamilyName f 2023-07-18 07:15:07,942 DEBUG [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/f 2023-07-18 07:15:07,942 DEBUG [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/f 2023-07-18 07:15:07,942 INFO [StoreOpener-ab78d55de4d49c9d3b27c485f00ed06b-1] regionserver.HStore(310): Store=ab78d55de4d49c9d3b27c485f00ed06b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,944 INFO [StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8b608539cf8446d01fa500dcdca355fc columnFamilyName f 2023-07-18 07:15:07,944 INFO 
[StoreOpener-8b608539cf8446d01fa500dcdca355fc-1] regionserver.HStore(310): Store=8b608539cf8446d01fa500dcdca355fc/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,944 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,947 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:07,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:07,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8b608539cf8446d01fa500dcdca355fc; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10045725440, jitterRate=-0.06441891193389893}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ab78d55de4d49c9d3b27c485f00ed06b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9504419040, jitterRate=-0.11483199894428253}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8b608539cf8446d01fa500dcdca355fc: 2023-07-18 07:15:07,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ab78d55de4d49c9d3b27c485f00ed06b: 2023-07-18 07:15:07,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc., pid=39, masterSystemTime=1689664507878 2023-07-18 07:15:07,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b., pid=37, masterSystemTime=1689664507878 2023-07-18 07:15:07,957 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:07,958 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:07,958 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507958"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507958"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507958"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507958"}]},"ts":"1689664507958"} 2023-07-18 07:15:07,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,958 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:07,958 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 
2023-07-18 07:15:07,958 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c6f754bd010218aa0c0fabc1a8cc990, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 07:15:07,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:07,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,959 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,959 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:07,959 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664507959"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664507959"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664507959"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664507959"}]},"ts":"1689664507959"} 2023-07-18 07:15:07,966 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=31 2023-07-18 07:15:07,966 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-18 07:15:07,966 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=31, state=SUCCESS; OpenRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,33769,1689664501155 in 234 msec 2023-07-18 07:15:07,966 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,39465,1689664501221 in 240 msec 2023-07-18 07:15:07,967 INFO [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,968 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, REOPEN/MOVE in 584 msec 2023-07-18 07:15:07,968 DEBUG [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/f 2023-07-18 07:15:07,968 DEBUG [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/f 2023-07-18 07:15:07,969 INFO [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c6f754bd010218aa0c0fabc1a8cc990 columnFamilyName f 2023-07-18 07:15:07,970 INFO [StoreOpener-4c6f754bd010218aa0c0fabc1a8cc990-1] regionserver.HStore(310): Store=4c6f754bd010218aa0c0fabc1a8cc990/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:07,971 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,977 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, REOPEN/MOVE in 601 msec 2023-07-18 07:15:07,977 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:07,978 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c6f754bd010218aa0c0fabc1a8cc990; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9560762240, jitterRate=-0.10958462953567505}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:07,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c6f754bd010218aa0c0fabc1a8cc990: 2023-07-18 07:15:07,983 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990., pid=40, masterSystemTime=1689664507878 2023-07-18 07:15:08,000 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=OPEN, 
openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:08,000 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:08,001 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:08,001 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664508000"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664508000"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664508000"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664508000"}]},"ts":"1689664508000"} 2023-07-18 07:15:08,011 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=26 2023-07-18 07:15:08,011 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=26, state=SUCCESS; OpenRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,39465,1689664501221 in 272 msec 2023-07-18 07:15:08,014 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, REOPEN/MOVE in 653 msec 2023-07-18 07:15:08,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-18 07:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_847781082. 
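The entry just above records the rsgroup MoveTables request completing: every region of Group_testTableMoveTruncateAndDrop has been reopened via REOPEN/MOVE TransitRegionStateProcedures on servers in Group_testTableMoveTruncateAndDrop_847781082, and the entries that follow show the client confirming the result with ListRSGroupInfos and GetRSGroupInfoOfTable. For orientation, a minimal sketch of the client-side calls that would drive this sequence; RSGroupAdminClient and its method signatures are assumed from the branch-2.4 hbase-rsgroup module, not taken from this log, so treat the sketch as illustrative rather than the test's actual code.

    // Hedged sketch: roughly the client calls behind the MoveTables / GetRSGroupInfoOfTable
    // requests logged above. RSGroupAdminClient and its signatures are assumptions based on
    // the branch-2.4 hbase-rsgroup module; verify against the version in use.
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        // Target group name as it appears in the log entry above.
        String targetGroup = "Group_testTableMoveTruncateAndDrop_847781082";
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Issues the RSGroupAdminService.MoveTables call; the master then reopens each
          // region of the table on a server belonging to the target group (the REOPEN/MOVE
          // procedures seen above).
          rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
          // Matches the GetRSGroupInfoOfTable request logged right after the move completes.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("table is now in rsgroup: " + info.getName());
        }
      }
    }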
2023-07-18 07:15:08,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:08,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:08,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:08,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:08,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:08,416 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:08,423 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:08,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:08,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:08,443 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664508443"}]},"ts":"1689664508443"} 2023-07-18 07:15:08,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 07:15:08,447 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 07:15:08,449 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 07:15:08,450 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, UNASSIGN}] 2023-07-18 07:15:08,453 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, UNASSIGN 2023-07-18 07:15:08,453 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, UNASSIGN 2023-07-18 07:15:08,453 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, UNASSIGN 2023-07-18 07:15:08,453 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, UNASSIGN 2023-07-18 07:15:08,454 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, UNASSIGN 2023-07-18 07:15:08,454 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:08,454 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:08,455 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664508454"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664508454"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664508454"}]},"ts":"1689664508454"} 2023-07-18 07:15:08,455 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664508454"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664508454"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664508454"}]},"ts":"1689664508454"} 2023-07-18 07:15:08,455 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:08,455 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:08,455 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664508455"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664508455"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664508455"}]},"ts":"1689664508455"} 2023-07-18 07:15:08,456 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664508455"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664508455"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664508455"}]},"ts":"1689664508455"} 2023-07-18 07:15:08,456 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:08,456 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664508456"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664508456"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664508456"}]},"ts":"1689664508456"} 2023-07-18 07:15:08,457 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:08,458 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=42, state=RUNNABLE; CloseRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:08,464 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=45, state=RUNNABLE; CloseRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:08,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=44, state=RUNNABLE; CloseRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:08,466 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:08,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 07:15:08,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:08,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ab78d55de4d49c9d3b27c485f00ed06b, disabling compactions & flushes 2023-07-18 07:15:08,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 
2023-07-18 07:15:08,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:08,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. after waiting 0 ms 2023-07-18 07:15:08,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:08,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:08,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 620b80ec7fc8a949d96f67128c493903, disabling compactions & flushes 2023-07-18 07:15:08,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:08,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:08,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. after waiting 0 ms 2023-07-18 07:15:08,621 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 2023-07-18 07:15:08,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:08,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b. 2023-07-18 07:15:08,629 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ab78d55de4d49c9d3b27c485f00ed06b: 2023-07-18 07:15:08,633 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:08,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903. 
2023-07-18 07:15:08,635 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 620b80ec7fc8a949d96f67128c493903: 2023-07-18 07:15:08,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:08,635 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:08,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 22bc5634eea3de71f9212e72bf460c81, disabling compactions & flushes 2023-07-18 07:15:08,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:08,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:08,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. after waiting 0 ms 2023-07-18 07:15:08,636 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:08,636 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=ab78d55de4d49c9d3b27c485f00ed06b, regionState=CLOSED 2023-07-18 07:15:08,637 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664508636"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664508636"}]},"ts":"1689664508636"} 2023-07-18 07:15:08,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:08,638 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:08,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8b608539cf8446d01fa500dcdca355fc, disabling compactions & flushes 2023-07-18 07:15:08,639 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:08,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:08,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 
after waiting 0 ms 2023-07-18 07:15:08,639 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 2023-07-18 07:15:08,640 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=620b80ec7fc8a949d96f67128c493903, regionState=CLOSED 2023-07-18 07:15:08,640 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664508640"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664508640"}]},"ts":"1689664508640"} 2023-07-18 07:15:08,647 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:08,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81. 2023-07-18 07:15:08,649 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 22bc5634eea3de71f9212e72bf460c81: 2023-07-18 07:15:08,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=44 2023-07-18 07:15:08,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=44, state=SUCCESS; CloseRegionProcedure ab78d55de4d49c9d3b27c485f00ed06b, server=jenkins-hbase4.apache.org,39465,1689664501221 in 175 msec 2023-07-18 07:15:08,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-18 07:15:08,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure 620b80ec7fc8a949d96f67128c493903, server=jenkins-hbase4.apache.org,33769,1689664501155 in 178 msec 2023-07-18 07:15:08,658 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:08,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ab78d55de4d49c9d3b27c485f00ed06b, UNASSIGN in 199 msec 2023-07-18 07:15:08,659 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc. 
2023-07-18 07:15:08,659 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8b608539cf8446d01fa500dcdca355fc: 2023-07-18 07:15:08,660 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=22bc5634eea3de71f9212e72bf460c81, regionState=CLOSED 2023-07-18 07:15:08,661 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664508660"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664508660"}]},"ts":"1689664508660"} 2023-07-18 07:15:08,661 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=620b80ec7fc8a949d96f67128c493903, UNASSIGN in 201 msec 2023-07-18 07:15:08,661 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:08,662 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:08,663 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c6f754bd010218aa0c0fabc1a8cc990, disabling compactions & flushes 2023-07-18 07:15:08,664 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=8b608539cf8446d01fa500dcdca355fc, regionState=CLOSED 2023-07-18 07:15:08,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:08,664 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664508664"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664508664"}]},"ts":"1689664508664"} 2023-07-18 07:15:08,665 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:08,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:08,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. after waiting 0 ms 2023-07-18 07:15:08,665 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 
2023-07-18 07:15:08,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-18 07:15:08,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure 22bc5634eea3de71f9212e72bf460c81, server=jenkins-hbase4.apache.org,39465,1689664501221 in 207 msec 2023-07-18 07:15:08,669 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=45 2023-07-18 07:15:08,669 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=22bc5634eea3de71f9212e72bf460c81, UNASSIGN in 217 msec 2023-07-18 07:15:08,669 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=45, state=SUCCESS; CloseRegionProcedure 8b608539cf8446d01fa500dcdca355fc, server=jenkins-hbase4.apache.org,33769,1689664501155 in 204 msec 2023-07-18 07:15:08,672 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8b608539cf8446d01fa500dcdca355fc, UNASSIGN in 219 msec 2023-07-18 07:15:08,678 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:08,680 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990. 2023-07-18 07:15:08,680 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c6f754bd010218aa0c0fabc1a8cc990: 2023-07-18 07:15:08,682 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:08,683 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=4c6f754bd010218aa0c0fabc1a8cc990, regionState=CLOSED 2023-07-18 07:15:08,683 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664508683"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664508683"}]},"ts":"1689664508683"} 2023-07-18 07:15:08,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=42 2023-07-18 07:15:08,687 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=42, state=SUCCESS; CloseRegionProcedure 4c6f754bd010218aa0c0fabc1a8cc990, server=jenkins-hbase4.apache.org,39465,1689664501221 in 227 msec 2023-07-18 07:15:08,689 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=41 2023-07-18 07:15:08,690 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4c6f754bd010218aa0c0fabc1a8cc990, UNASSIGN in 237 msec 2023-07-18 07:15:08,691 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664508690"}]},"ts":"1689664508690"} 2023-07-18 07:15:08,692 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 07:15:08,694 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 07:15:08,697 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 264 msec 2023-07-18 07:15:08,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-18 07:15:08,751 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-18 07:15:08,753 INFO [Listener at localhost/33473] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:08,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:08,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-18 07:15:08,771 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-18 07:15:08,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 07:15:08,785 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:08,785 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:08,785 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:08,785 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:08,785 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:08,789 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/f, FileablePath, 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/recovered.edits] 2023-07-18 07:15:08,790 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/recovered.edits] 2023-07-18 07:15:08,790 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/recovered.edits] 2023-07-18 07:15:08,796 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/recovered.edits] 2023-07-18 07:15:08,798 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/recovered.edits] 2023-07-18 07:15:08,810 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903/recovered.edits/7.seqid 2023-07-18 07:15:08,812 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81/recovered.edits/7.seqid 2023-07-18 07:15:08,812 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/620b80ec7fc8a949d96f67128c493903 2023-07-18 07:15:08,812 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc/recovered.edits/7.seqid 2023-07-18 07:15:08,813 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/22bc5634eea3de71f9212e72bf460c81 2023-07-18 07:15:08,813 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8b608539cf8446d01fa500dcdca355fc 2023-07-18 07:15:08,814 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b/recovered.edits/7.seqid 2023-07-18 07:15:08,815 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ab78d55de4d49c9d3b27c485f00ed06b 2023-07-18 07:15:08,817 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990/recovered.edits/7.seqid 2023-07-18 07:15:08,818 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4c6f754bd010218aa0c0fabc1a8cc990 2023-07-18 07:15:08,818 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 07:15:08,866 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 07:15:08,870 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-18 07:15:08,870 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-18 07:15:08,871 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664508871"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:08,871 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664508871"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:08,871 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664508871"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:08,871 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664508871"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:08,871 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664508871"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:08,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 07:15:08,875 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 07:15:08,875 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 4c6f754bd010218aa0c0fabc1a8cc990, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664506135.4c6f754bd010218aa0c0fabc1a8cc990.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 22bc5634eea3de71f9212e72bf460c81, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689664506135.22bc5634eea3de71f9212e72bf460c81.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => ab78d55de4d49c9d3b27c485f00ed06b, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664506135.ab78d55de4d49c9d3b27c485f00ed06b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 8b608539cf8446d01fa500dcdca355fc, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664506135.8b608539cf8446d01fa500dcdca355fc.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 620b80ec7fc8a949d96f67128c493903, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664506135.620b80ec7fc8a949d96f67128c493903.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 07:15:08,875 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-18 07:15:08,875 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664508875"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:08,882 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 07:15:08,890 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:08,891 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:08,891 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:08,890 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:08,890 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:08,892 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c empty. 2023-07-18 07:15:08,892 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 empty. 2023-07-18 07:15:08,892 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 empty. 2023-07-18 07:15:08,892 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 empty. 2023-07-18 07:15:08,892 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 empty. 
2023-07-18 07:15:08,892 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:08,893 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:08,893 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:08,893 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:08,893 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:08,894 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 07:15:08,923 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:08,925 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => e77138064ccac5bec6f5b87b2a1330e6, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:08,927 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0d0a70f6a7d6920cf5c3d5871364b005, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:08,929 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => ea4ab481bbf257ccae5e4a80bbc46fb8, NAME => 
'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:08,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:08,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing e77138064ccac5bec6f5b87b2a1330e6, disabling compactions & flushes 2023-07-18 07:15:08,995 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:08,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:08,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. after waiting 0 ms 2023-07-18 07:15:08,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:08,995 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 
2023-07-18 07:15:08,995 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for e77138064ccac5bec6f5b87b2a1330e6: 2023-07-18 07:15:09,003 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7811282a420a2f12fa3ce22caa1deea9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:09,014 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing ea4ab481bbf257ccae5e4a80bbc46fb8, disabling compactions & flushes 2023-07-18 07:15:09,015 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:09,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:09,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. after waiting 0 ms 2023-07-18 07:15:09,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:09,015 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 
2023-07-18 07:15:09,015 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for ea4ab481bbf257ccae5e4a80bbc46fb8: 2023-07-18 07:15:09,016 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 3376d88003deb7180ffb05b158ef1b8c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:09,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 0d0a70f6a7d6920cf5c3d5871364b005, disabling compactions & flushes 2023-07-18 07:15:09,025 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:09,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:09,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. after waiting 0 ms 2023-07-18 07:15:09,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:09,025 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 
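The repeated creating {ENCODED => ...} entries above spell out the table layout being rebuilt under .tmp: a single column family 'f' with VERSIONS => '1' and otherwise default attributes, split into five regions at 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz'. A minimal sketch of an equivalent descriptor, assuming the standard HBase 2.x client builder API (the class below and its method names are illustrative, not taken from the test source):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GroupTestTableLayout {
      // Equivalent of the descriptor printed in the "creating {ENCODED => ...}" entries:
      // one family 'f', VERSIONS => '1', everything else at its default.
      static TableDescriptor descriptor() {
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("f"))
                .setMaxVersions(1)   // VERSIONS => '1' in the log
                .build())
            .build();
      }

      // Four split keys produce the five regions logged above:
      // ('', aaaaa), (aaaaa, i\xBF\x14i\xBE), ..., (zzzzz, '').
      static byte[][] splitKeys() {
        return new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
        };
      }
    }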
2023-07-18 07:15:09,025 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0d0a70f6a7d6920cf5c3d5871364b005: 2023-07-18 07:15:09,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 7811282a420a2f12fa3ce22caa1deea9, disabling compactions & flushes 2023-07-18 07:15:09,052 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,052 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. after waiting 0 ms 2023-07-18 07:15:09,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,053 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,053 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 7811282a420a2f12fa3ce22caa1deea9: 2023-07-18 07:15:09,064 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,065 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 3376d88003deb7180ffb05b158ef1b8c, disabling compactions & flushes 2023-07-18 07:15:09,065 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,065 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,065 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 
after waiting 0 ms 2023-07-18 07:15:09,065 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,065 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,065 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 3376d88003deb7180ffb05b158ef1b8c: 2023-07-18 07:15:09,069 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664509069"}]},"ts":"1689664509069"} 2023-07-18 07:15:09,069 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664509069"}]},"ts":"1689664509069"} 2023-07-18 07:15:09,069 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664509069"}]},"ts":"1689664509069"} 2023-07-18 07:15:09,070 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664509069"}]},"ts":"1689664509069"} 2023-07-18 07:15:09,070 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664509069"}]},"ts":"1689664509069"} 2023-07-18 07:15:09,073 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
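With "Added 5 regions to meta." the master-side region recreation for pid=52 is complete; the whole archive-and-recreate sequence above is what a TruncateTableProcedure with preserveSplits=true (as reported for pid=52 further down) does on behalf of one client call. A sketch of that client call, assuming the standard HBase 2.x Admin API; the connection setup is illustrative and only truncateTable() itself is the point:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncatePreservingSplits {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          admin.disableTable(tn);         // truncate requires a disabled table
          admin.truncateTable(tn, true);  // preserveSplits=true keeps the five region boundaries
        }
      }
    }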
2023-07-18 07:15:09,075 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664509075"}]},"ts":"1689664509075"} 2023-07-18 07:15:09,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 07:15:09,077 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-18 07:15:09,082 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:09,083 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:09,083 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:09,083 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:09,085 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, ASSIGN}] 2023-07-18 07:15:09,087 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, ASSIGN 2023-07-18 07:15:09,087 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, ASSIGN 2023-07-18 07:15:09,087 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, ASSIGN 2023-07-18 07:15:09,088 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, ASSIGN 2023-07-18 07:15:09,088 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, ASSIGN 2023-07-18 07:15:09,089 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:09,089 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:09,089 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:09,089 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:09,089 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:09,112 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 07:15:09,175 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 07:15:09,176 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-18 07:15:09,176 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:09,176 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 07:15:09,177 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 07:15:09,177 INFO [HBase-Metrics2-1] 
impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-18 07:15:09,178 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 07:15:09,179 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 07:15:09,239 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-18 07:15:09,243 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=7811282a420a2f12fa3ce22caa1deea9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:09,243 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=e77138064ccac5bec6f5b87b2a1330e6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,243 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=3376d88003deb7180ffb05b158ef1b8c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,243 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509243"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509243"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509243"}]},"ts":"1689664509243"} 2023-07-18 07:15:09,243 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0d0a70f6a7d6920cf5c3d5871364b005, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,243 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509243"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509243"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509243"}]},"ts":"1689664509243"} 2023-07-18 07:15:09,243 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=ea4ab481bbf257ccae5e4a80bbc46fb8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:09,243 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509243"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509243"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509243"}]},"ts":"1689664509243"} 2023-07-18 07:15:09,244 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509243"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509243"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509243"}]},"ts":"1689664509243"} 2023-07-18 
07:15:09,243 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509243"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509243"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509243"}]},"ts":"1689664509243"} 2023-07-18 07:15:09,252 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=53, state=RUNNABLE; OpenRegionProcedure e77138064ccac5bec6f5b87b2a1330e6, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:09,254 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=54, state=RUNNABLE; OpenRegionProcedure 0d0a70f6a7d6920cf5c3d5871364b005, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:09,255 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=57, state=RUNNABLE; OpenRegionProcedure 3376d88003deb7180ffb05b158ef1b8c, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:09,257 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=55, state=RUNNABLE; OpenRegionProcedure ea4ab481bbf257ccae5e4a80bbc46fb8, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:09,260 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure 7811282a420a2f12fa3ce22caa1deea9, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:09,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 07:15:09,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 
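At this point the master has queued one ASSIGN TransitRegionStateProcedure per region (pids 53-57) and dispatched OpenRegionProcedures 58-62 to the two target region servers, while the client keeps polling "Checking to see if procedure is done pid=52". In a mini-cluster test, blocking until that assignment settles typically looks like the sketch below, assuming the HBaseTestingUtility and Admin APIs; names are illustrative:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class WaitForAssignment {
      // util is the mini-cluster's HBaseTestingUtility; tableName as in the log.
      static void waitUntilAssigned(HBaseTestingUtility util, TableName tableName)
          throws Exception {
        // Polls hbase:meta until every region of the table has an open location.
        util.waitUntilAllRegionsAssigned(tableName);
        try (Admin admin = util.getConnection().getAdmin()) {
          while (!admin.isTableAvailable(tableName)) {  // lighter client-side check
            Thread.sleep(100);
          }
        }
      }
    }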
2023-07-18 07:15:09,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e77138064ccac5bec6f5b87b2a1330e6, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 07:15:09,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:09,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:09,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:09,415 INFO [StoreOpener-e77138064ccac5bec6f5b87b2a1330e6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:09,417 DEBUG [StoreOpener-e77138064ccac5bec6f5b87b2a1330e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/f 2023-07-18 07:15:09,418 DEBUG [StoreOpener-e77138064ccac5bec6f5b87b2a1330e6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/f 2023-07-18 07:15:09,418 INFO [StoreOpener-e77138064ccac5bec6f5b87b2a1330e6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e77138064ccac5bec6f5b87b2a1330e6 columnFamilyName f 2023-07-18 07:15:09,419 INFO [StoreOpener-e77138064ccac5bec6f5b87b2a1330e6-1] regionserver.HStore(310): Store=e77138064ccac5bec6f5b87b2a1330e6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:09,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 
07:15:09,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:09,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ea4ab481bbf257ccae5e4a80bbc46fb8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 07:15:09,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:09,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:09,431 INFO [StoreOpener-ea4ab481bbf257ccae5e4a80bbc46fb8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,432 DEBUG [StoreOpener-ea4ab481bbf257ccae5e4a80bbc46fb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/f 2023-07-18 07:15:09,433 DEBUG [StoreOpener-ea4ab481bbf257ccae5e4a80bbc46fb8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/f 2023-07-18 07:15:09,433 INFO [StoreOpener-ea4ab481bbf257ccae5e4a80bbc46fb8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ea4ab481bbf257ccae5e4a80bbc46fb8 columnFamilyName f 2023-07-18 07:15:09,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:09,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e77138064ccac5bec6f5b87b2a1330e6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10500310560, jitterRate=-0.0220823734998703}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:09,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e77138064ccac5bec6f5b87b2a1330e6: 2023-07-18 07:15:09,436 INFO [StoreOpener-ea4ab481bbf257ccae5e4a80bbc46fb8-1] regionserver.HStore(310): Store=ea4ab481bbf257ccae5e4a80bbc46fb8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:09,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:09,447 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6., pid=58, masterSystemTime=1689664509407 2023-07-18 07:15:09,448 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:09,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ea4ab481bbf257ccae5e4a80bbc46fb8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10992197920, jitterRate=0.023728206753730774}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:09,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ea4ab481bbf257ccae5e4a80bbc46fb8: 2023-07-18 07:15:09,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 
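The "Wrote file=.../recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1" entries show each freshly opened region dropping a sequence-id marker into its recovered.edits directory rather than replaying anything ("Found 0 recovered edits file(s)"). That marker can be confirmed on the test filesystem with the plain Hadoop FileSystem API; a sketch, with the region path copied from the log and everything else assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListRecoveredEdits {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Region directory taken from the log lines above; the seqid marker
        // lives under its recovered.edits/ subdirectory.
        Path regionDir = new Path(
            "hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9"
            + "/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6");
        FileSystem fs = regionDir.getFileSystem(conf);
        for (FileStatus st : fs.listStatus(new Path(regionDir, "recovered.edits"))) {
          System.out.println(st.getPath() + " len=" + st.getLen());  // expect .../1.seqid
        }
      }
    }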
2023-07-18 07:15:09,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:09,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,451 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3376d88003deb7180ffb05b158ef1b8c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 07:15:09,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,453 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8., pid=61, masterSystemTime=1689664509421 2023-07-18 07:15:09,453 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=e77138064ccac5bec6f5b87b2a1330e6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,454 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509453"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664509453"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664509453"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664509453"}]},"ts":"1689664509453"} 2023-07-18 07:15:09,456 INFO [StoreOpener-3376d88003deb7180ffb05b158ef1b8c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:09,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 
2023-07-18 07:15:09,456 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7811282a420a2f12fa3ce22caa1deea9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 07:15:09,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,457 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,457 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=ea4ab481bbf257ccae5e4a80bbc46fb8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:09,458 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509457"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664509457"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664509457"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664509457"}]},"ts":"1689664509457"} 2023-07-18 07:15:09,459 DEBUG [StoreOpener-3376d88003deb7180ffb05b158ef1b8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/f 2023-07-18 07:15:09,459 DEBUG [StoreOpener-3376d88003deb7180ffb05b158ef1b8c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/f 2023-07-18 07:15:09,461 INFO [StoreOpener-3376d88003deb7180ffb05b158ef1b8c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3376d88003deb7180ffb05b158ef1b8c columnFamilyName f 2023-07-18 07:15:09,462 INFO [StoreOpener-3376d88003deb7180ffb05b158ef1b8c-1] regionserver.HStore(310): Store=3376d88003deb7180ffb05b158ef1b8c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:09,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=53 2023-07-18 07:15:09,462 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=53, state=SUCCESS; OpenRegionProcedure e77138064ccac5bec6f5b87b2a1330e6, server=jenkins-hbase4.apache.org,33769,1689664501155 in 206 msec 2023-07-18 07:15:09,463 INFO [StoreOpener-7811282a420a2f12fa3ce22caa1deea9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=55 2023-07-18 07:15:09,465 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, ASSIGN in 379 msec 2023-07-18 07:15:09,465 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=55, state=SUCCESS; OpenRegionProcedure ea4ab481bbf257ccae5e4a80bbc46fb8, server=jenkins-hbase4.apache.org,39465,1689664501221 in 204 msec 2023-07-18 07:15:09,466 DEBUG [StoreOpener-7811282a420a2f12fa3ce22caa1deea9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/f 2023-07-18 07:15:09,466 DEBUG [StoreOpener-7811282a420a2f12fa3ce22caa1deea9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/f 2023-07-18 07:15:09,467 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, ASSIGN in 380 msec 2023-07-18 07:15:09,468 INFO [StoreOpener-7811282a420a2f12fa3ce22caa1deea9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7811282a420a2f12fa3ce22caa1deea9 columnFamilyName f 2023-07-18 07:15:09,468 INFO [StoreOpener-7811282a420a2f12fa3ce22caa1deea9-1] regionserver.HStore(310): Store=7811282a420a2f12fa3ce22caa1deea9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:09,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:09,469 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:09,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:09,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3376d88003deb7180ffb05b158ef1b8c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10908524800, jitterRate=0.015935540199279785}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:09,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3376d88003deb7180ffb05b158ef1b8c: 2023-07-18 07:15:09,477 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c., pid=60, masterSystemTime=1689664509407 2023-07-18 07:15:09,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:09,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 
2023-07-18 07:15:09,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0d0a70f6a7d6920cf5c3d5871364b005, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 07:15:09,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:09,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,483 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=3376d88003deb7180ffb05b158ef1b8c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,483 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509482"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664509482"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664509482"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664509482"}]},"ts":"1689664509482"} 2023-07-18 07:15:09,495 INFO [StoreOpener-0d0a70f6a7d6920cf5c3d5871364b005-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:09,498 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7811282a420a2f12fa3ce22caa1deea9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9741658880, jitterRate=-0.09273731708526611}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:09,498 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7811282a420a2f12fa3ce22caa1deea9: 2023-07-18 07:15:09,498 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=57 2023-07-18 07:15:09,498 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=57, state=SUCCESS; OpenRegionProcedure 3376d88003deb7180ffb05b158ef1b8c, 
server=jenkins-hbase4.apache.org,33769,1689664501155 in 234 msec 2023-07-18 07:15:09,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9., pid=62, masterSystemTime=1689664509421 2023-07-18 07:15:09,500 DEBUG [StoreOpener-0d0a70f6a7d6920cf5c3d5871364b005-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/f 2023-07-18 07:15:09,500 DEBUG [StoreOpener-0d0a70f6a7d6920cf5c3d5871364b005-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/f 2023-07-18 07:15:09,500 INFO [StoreOpener-0d0a70f6a7d6920cf5c3d5871364b005-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0d0a70f6a7d6920cf5c3d5871364b005 columnFamilyName f 2023-07-18 07:15:09,503 INFO [StoreOpener-0d0a70f6a7d6920cf5c3d5871364b005-1] regionserver.HStore(310): Store=0d0a70f6a7d6920cf5c3d5871364b005/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:09,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,504 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, ASSIGN in 413 msec 2023-07-18 07:15:09,505 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=7811282a420a2f12fa3ce22caa1deea9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:09,505 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509504"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664509504"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664509504"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664509504"}]},"ts":"1689664509504"} 2023-07-18 07:15:09,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:09,509 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:09,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-18 07:15:09,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure 7811282a420a2f12fa3ce22caa1deea9, server=jenkins-hbase4.apache.org,39465,1689664501221 in 247 msec 2023-07-18 07:15:09,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:09,514 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0d0a70f6a7d6920cf5c3d5871364b005; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9891682720, jitterRate=-0.07876525819301605}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:09,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0d0a70f6a7d6920cf5c3d5871364b005: 2023-07-18 07:15:09,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005., pid=59, masterSystemTime=1689664509407 2023-07-18 07:15:09,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, ASSIGN in 428 msec 2023-07-18 07:15:09,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:09,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 
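Once pids 58-62 finish, all five regions report OPEN with openSeqNum=2, split between jenkins-hbase4.apache.org,33769,1689664501155 and jenkins-hbase4.apache.org,39465,1689664501221. The same picture is visible from a client through a RegionLocator; a sketch assuming the standard HBase 2.x client API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionLocations {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(tn)) {
          // Expect five locations, matching the five OPEN regions in the log.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }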
2023-07-18 07:15:09,519 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=0d0a70f6a7d6920cf5c3d5871364b005, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,519 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509519"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664509519"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664509519"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664509519"}]},"ts":"1689664509519"} 2023-07-18 07:15:09,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=54 2023-07-18 07:15:09,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=54, state=SUCCESS; OpenRegionProcedure 0d0a70f6a7d6920cf5c3d5871364b005, server=jenkins-hbase4.apache.org,33769,1689664501155 in 267 msec 2023-07-18 07:15:09,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=52 2023-07-18 07:15:09,527 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, ASSIGN in 439 msec 2023-07-18 07:15:09,527 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664509527"}]},"ts":"1689664509527"} 2023-07-18 07:15:09,529 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-18 07:15:09,532 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-18 07:15:09,534 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 771 msec 2023-07-18 07:15:09,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-18 07:15:09,879 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-18 07:15:09,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:09,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:09,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:09,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:09,884 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:09,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:09,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:09,898 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664509898"}]},"ts":"1689664509898"} 2023-07-18 07:15:09,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-18 07:15:09,900 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-18 07:15:09,902 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-18 07:15:09,907 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, UNASSIGN}] 2023-07-18 07:15:09,909 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, UNASSIGN 2023-07-18 07:15:09,909 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, UNASSIGN 2023-07-18 07:15:09,909 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, UNASSIGN 2023-07-18 07:15:09,909 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, UNASSIGN 2023-07-18 
07:15:09,910 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, UNASSIGN 2023-07-18 07:15:09,910 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0d0a70f6a7d6920cf5c3d5871364b005, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,911 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=e77138064ccac5bec6f5b87b2a1330e6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,911 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509910"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509910"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509910"}]},"ts":"1689664509910"} 2023-07-18 07:15:09,911 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509910"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509910"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509910"}]},"ts":"1689664509910"} 2023-07-18 07:15:09,911 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=3376d88003deb7180ffb05b158ef1b8c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:09,911 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664509911"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509911"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509911"}]},"ts":"1689664509911"} 2023-07-18 07:15:09,911 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=7811282a420a2f12fa3ce22caa1deea9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:09,911 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=ea4ab481bbf257ccae5e4a80bbc46fb8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:09,911 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509911"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509911"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509911"}]},"ts":"1689664509911"} 2023-07-18 07:15:09,912 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664509911"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664509911"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664509911"}]},"ts":"1689664509911"} 2023-07-18 07:15:09,913 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=64, state=RUNNABLE; CloseRegionProcedure e77138064ccac5bec6f5b87b2a1330e6, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:09,914 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure 0d0a70f6a7d6920cf5c3d5871364b005, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:09,915 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=68, state=RUNNABLE; CloseRegionProcedure 3376d88003deb7180ffb05b158ef1b8c, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:09,918 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=67, state=RUNNABLE; CloseRegionProcedure 7811282a420a2f12fa3ce22caa1deea9, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:09,919 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=66, state=RUNNABLE; CloseRegionProcedure ea4ab481bbf257ccae5e4a80bbc46fb8, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:10,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-18 07:15:10,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:10,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0d0a70f6a7d6920cf5c3d5871364b005, disabling compactions & flushes 2023-07-18 07:15:10,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:10,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:10,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. after waiting 0 ms 2023-07-18 07:15:10,069 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 
2023-07-18 07:15:10,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:10,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7811282a420a2f12fa3ce22caa1deea9, disabling compactions & flushes 2023-07-18 07:15:10,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:10,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:10,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. after waiting 0 ms 2023-07-18 07:15:10,072 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 2023-07-18 07:15:10,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:10,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005. 2023-07-18 07:15:10,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0d0a70f6a7d6920cf5c3d5871364b005: 2023-07-18 07:15:10,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:10,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:10,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e77138064ccac5bec6f5b87b2a1330e6, disabling compactions & flushes 2023-07-18 07:15:10,095 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:10,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:10,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. after waiting 0 ms 2023-07-18 07:15:10,095 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 
2023-07-18 07:15:10,096 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=0d0a70f6a7d6920cf5c3d5871364b005, regionState=CLOSED 2023-07-18 07:15:10,096 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664510096"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664510096"}]},"ts":"1689664510096"} 2023-07-18 07:15:10,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-18 07:15:10,102 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure 0d0a70f6a7d6920cf5c3d5871364b005, server=jenkins-hbase4.apache.org,33769,1689664501155 in 184 msec 2023-07-18 07:15:10,106 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0d0a70f6a7d6920cf5c3d5871364b005, UNASSIGN in 195 msec 2023-07-18 07:15:10,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:10,116 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:10,117 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6. 2023-07-18 07:15:10,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e77138064ccac5bec6f5b87b2a1330e6: 2023-07-18 07:15:10,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9. 
2023-07-18 07:15:10,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7811282a420a2f12fa3ce22caa1deea9: 2023-07-18 07:15:10,121 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:10,121 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:10,123 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=e77138064ccac5bec6f5b87b2a1330e6, regionState=CLOSED 2023-07-18 07:15:10,123 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664510122"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664510122"}]},"ts":"1689664510122"} 2023-07-18 07:15:10,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:10,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:10,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3376d88003deb7180ffb05b158ef1b8c, disabling compactions & flushes 2023-07-18 07:15:10,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ea4ab481bbf257ccae5e4a80bbc46fb8, disabling compactions & flushes 2023-07-18 07:15:10,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:10,124 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:10,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:10,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. after waiting 0 ms 2023-07-18 07:15:10,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:10,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 
2023-07-18 07:15:10,125 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=7811282a420a2f12fa3ce22caa1deea9, regionState=CLOSED 2023-07-18 07:15:10,125 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664510125"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664510125"}]},"ts":"1689664510125"} 2023-07-18 07:15:10,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. after waiting 0 ms 2023-07-18 07:15:10,125 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 2023-07-18 07:15:10,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=64 2023-07-18 07:15:10,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=64, state=SUCCESS; CloseRegionProcedure e77138064ccac5bec6f5b87b2a1330e6, server=jenkins-hbase4.apache.org,33769,1689664501155 in 213 msec 2023-07-18 07:15:10,132 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=67 2023-07-18 07:15:10,132 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=67, state=SUCCESS; CloseRegionProcedure 7811282a420a2f12fa3ce22caa1deea9, server=jenkins-hbase4.apache.org,39465,1689664501221 in 210 msec 2023-07-18 07:15:10,133 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:10,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e77138064ccac5bec6f5b87b2a1330e6, UNASSIGN in 224 msec 2023-07-18 07:15:10,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8. 2023-07-18 07:15:10,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ea4ab481bbf257ccae5e4a80bbc46fb8: 2023-07-18 07:15:10,134 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7811282a420a2f12fa3ce22caa1deea9, UNASSIGN in 225 msec 2023-07-18 07:15:10,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:10,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c. 
2023-07-18 07:15:10,135 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3376d88003deb7180ffb05b158ef1b8c: 2023-07-18 07:15:10,135 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:10,136 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=ea4ab481bbf257ccae5e4a80bbc46fb8, regionState=CLOSED 2023-07-18 07:15:10,136 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689664510136"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664510136"}]},"ts":"1689664510136"} 2023-07-18 07:15:10,137 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:10,138 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=3376d88003deb7180ffb05b158ef1b8c, regionState=CLOSED 2023-07-18 07:15:10,138 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689664510138"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664510138"}]},"ts":"1689664510138"} 2023-07-18 07:15:10,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=66 2023-07-18 07:15:10,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=66, state=SUCCESS; CloseRegionProcedure ea4ab481bbf257ccae5e4a80bbc46fb8, server=jenkins-hbase4.apache.org,39465,1689664501221 in 219 msec 2023-07-18 07:15:10,143 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=ea4ab481bbf257ccae5e4a80bbc46fb8, UNASSIGN in 234 msec 2023-07-18 07:15:10,146 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=68 2023-07-18 07:15:10,147 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=68, state=SUCCESS; CloseRegionProcedure 3376d88003deb7180ffb05b158ef1b8c, server=jenkins-hbase4.apache.org,33769,1689664501155 in 229 msec 2023-07-18 07:15:10,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=63 2023-07-18 07:15:10,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3376d88003deb7180ffb05b158ef1b8c, UNASSIGN in 240 msec 2023-07-18 07:15:10,150 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664510150"}]},"ts":"1689664510150"} 2023-07-18 07:15:10,151 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-18 07:15:10,153 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-18 07:15:10,157 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 269 msec 2023-07-18 07:15:10,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-18 07:15:10,202 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-18 07:15:10,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,217 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_847781082' 2023-07-18 07:15:10,218 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:10,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:10,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-18 07:15:10,234 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:10,234 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:10,234 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:10,234 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:10,234 DEBUG 
[HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:10,241 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/recovered.edits] 2023-07-18 07:15:10,241 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/recovered.edits] 2023-07-18 07:15:10,242 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/recovered.edits] 2023-07-18 07:15:10,243 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/recovered.edits] 2023-07-18 07:15:10,245 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/recovered.edits] 2023-07-18 07:15:10,255 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c/recovered.edits/4.seqid 2023-07-18 07:15:10,258 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3376d88003deb7180ffb05b158ef1b8c 2023-07-18 07:15:10,258 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9/recovered.edits/4.seqid 2023-07-18 07:15:10,259 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7811282a420a2f12fa3ce22caa1deea9 2023-07-18 07:15:10,260 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005/recovered.edits/4.seqid 2023-07-18 07:15:10,260 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6/recovered.edits/4.seqid 2023-07-18 07:15:10,261 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0d0a70f6a7d6920cf5c3d5871364b005 2023-07-18 07:15:10,261 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e77138064ccac5bec6f5b87b2a1330e6 2023-07-18 07:15:10,261 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8/recovered.edits/4.seqid 2023-07-18 07:15:10,262 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testTableMoveTruncateAndDrop/ea4ab481bbf257ccae5e4a80bbc46fb8 2023-07-18 07:15:10,262 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-18 07:15:10,266 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,285 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-18 07:15:10,289 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' 
descriptor. 2023-07-18 07:15:10,292 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,292 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-18 07:15:10,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664510292"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:10,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664510292"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:10,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664510292"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:10,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664510292"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:10,293 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664510292"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:10,296 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 07:15:10,296 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e77138064ccac5bec6f5b87b2a1330e6, NAME => 'Group_testTableMoveTruncateAndDrop,,1689664508820.e77138064ccac5bec6f5b87b2a1330e6.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 0d0a70f6a7d6920cf5c3d5871364b005, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689664508820.0d0a70f6a7d6920cf5c3d5871364b005.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => ea4ab481bbf257ccae5e4a80bbc46fb8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689664508820.ea4ab481bbf257ccae5e4a80bbc46fb8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 7811282a420a2f12fa3ce22caa1deea9, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689664508820.7811282a420a2f12fa3ce22caa1deea9.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 3376d88003deb7180ffb05b158ef1b8c, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689664508820.3376d88003deb7180ffb05b158ef1b8c.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 07:15:10,296 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-18 07:15:10,296 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664510296"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:10,298 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-18 07:15:10,301 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-18 07:15:10,304 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 92 msec 2023-07-18 07:15:10,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-18 07:15:10,335 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-18 07:15:10,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:10,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:10,340 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33769] ipc.CallRunner(144): callId: 156 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:48992 deadline: 1689664570339, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42375 startCode=1689664504791. As of locationSeqNum=6. 2023-07-18 07:15:10,444 DEBUG [hconnection-0xb2b369f-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:10,447 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33584, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:10,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:10,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 07:15:10,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:10,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup default 2023-07-18 07:15:10,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:10,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:10,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_847781082, current retry=0 2023-07-18 07:15:10,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:10,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_847781082 => default 2023-07-18 07:15:10,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:10,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_847781082 2023-07-18 07:15:10,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:10,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:10,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:10,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 07:15:10,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:10,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:10,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:10,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:10,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:10,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:10,532 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:10,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:10,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:10,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:10,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:10,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 146 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665710555, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:10,557 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:10,559 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:10,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,561 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:10,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:10,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:10,592 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=494 (was 425) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57245@0x39cbd810 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:42711 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-577766439_17 at /127.0.0.1:46520 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1597406304-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57245@0x39cbd810-SendThread(127.0.0.1:57245) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1597406304-639 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1597406304-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1029298644_17 at /127.0.0.1:32810 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:42375Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1810549220-172.31.14.131-1689664494510:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42711 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:46438 [Receiving block BP-1810549220-172.31.14.131-1689664494510:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57245@0x39cbd810-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1597406304-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:32834 [Receiving block BP-1810549220-172.31.14.131-1689664494510:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1810549220-172.31.14.131-1689664494510:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1597406304-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:59712 [Receiving block BP-1810549220-172.31.14.131-1689664494510:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2133554286_17 at /127.0.0.1:59730 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1597406304-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1810549220-172.31.14.131-1689664494510:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1597406304-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42375-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0xb2b369f-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9-prefix:jenkins-hbase4.apache.org,42375,1689664504791 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-262b5f70-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42375 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:42375 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1597406304-640-acceptor-0@5833607e-ServerConnector@251bffae{HTTP/1.1, (http/1.1)}{0.0.0.0:45257} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42375 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x320932da-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=774 (was 675) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=524 (was 517) - SystemLoadAverage LEAK? -, ProcessCount=174 (was 176), AvailableMemoryMB=3177 (was 3055) - AvailableMemoryMB LEAK? 
- 2023-07-18 07:15:10,616 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=494, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=524, ProcessCount=174, AvailableMemoryMB=3176 2023-07-18 07:15:10,616 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-18 07:15:10,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:10,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:10,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:10,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:10,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:10,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:10,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:10,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:10,634 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:10,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:10,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:10,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User 
jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:10,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:10,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,648 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 174 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665710647, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:10,648 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:10,650 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:10,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,651 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:10,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:10,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:10,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-18 07:15:10,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 180 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:52448 deadline: 1689665710653, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 07:15:10,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-18 07:15:10,654 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:52448 deadline: 1689665710654, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 07:15:10,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-18 07:15:10,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:52448 deadline: 1689665710655, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-18 07:15:10,657 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-18 07:15:10,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-18 07:15:10,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:10,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:10,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:10,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
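Aside on the group-name checks exercised above: the server rejected "foo*", "foo@" and "-" with ConstraintException "RSGroup name should only contain alphanumeric characters" but accepted "foo_123". Below is a minimal, hypothetical client-side pre-check that mirrors that observed behavior; the character class (alphanumerics plus underscore) is an assumption inferred from the log, not taken from the HBase source of checkGroupName.

import java.util.regex.Pattern;

// Hypothetical pre-check mirroring the ConstraintException seen in the log:
// "RSGroup name should only contain alphanumeric characters". The pattern is
// an inference from the log: "foo_123" was accepted, "foo*", "foo@" and "-"
// were rejected, so underscore appears to be allowed.
public final class RSGroupNameCheck {
    private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

    private RSGroupNameCheck() {
    }

    public static void check(String groupName) {
        if (groupName == null || !VALID.matcher(groupName).matches()) {
            throw new IllegalArgumentException(
                "RSGroup name should only contain alphanumeric characters: " + groupName);
        }
    }

    public static void main(String[] args) {
        check("foo_123");            // passes, matching the successful add in the log
        try {
            check("foo*");           // rejected, matching the logged ConstraintException
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}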
2023-07-18 07:15:10,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:10,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:10,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:10,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-18 07:15:10,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:10,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:10,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:10,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 07:15:10,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:10,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:10,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:10,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:10,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:10,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:10,693 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:10,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:10,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:10,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:10,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 218 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665710708, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:10,709 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:10,711 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:10,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,712 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:10,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:10,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:10,729 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=497 (was 494) Potentially hanging thread: hconnection-0x320932da-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=774 (was 774), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=524 (was 524), ProcessCount=174 (was 174), AvailableMemoryMB=3176 (was 3176) 2023-07-18 07:15:10,748 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=497, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=524, ProcessCount=174, AvailableMemoryMB=3173 2023-07-18 07:15:10,749 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-18 07:15:10,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:10,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
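For orientation: every test's setup/teardown in this log repeats the same RPC sequence (list groups, move empty table/server sets back to default, remove the non-default group, re-add it, then try to move the master's address into it, which fails because only live region servers can be moved). The sketch below reconstructs that sequence from the client side. It is only a sketch: the RSGroupAdminClient constructor and method signatures are assumed from the class and method names in the stack traces above, not verified against a specific HBase release.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Sketch of the per-test cleanup sequence visible in the log (signatures assumed).
public class RSGroupCleanupSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            RSGroupAdminClient admin = new RSGroupAdminClient(conn);

            // Empty moves; the server logs these as
            // "moveTables() passed an empty set. Ignoring."
            admin.moveTables(Collections.emptySet(), "default");
            admin.moveServers(Collections.emptySet(), "default");

            // Drop and recreate the helper group used by the tests.
            admin.removeRSGroup("master");
            admin.addRSGroup("master");

            // This is the call that fails in the log: the master's address
            // (port 33141 here) is not an online region server, so the server
            // answers with ConstraintException "... is either offline or it
            // does not exist." and the test logs it as "Got this on setup, FYI".
            try {
                admin.moveServers(
                    Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33141)),
                    "master");
            } catch (IOException expected) {
                System.out.println(expected.getMessage());
            }
        }
    }
}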
2023-07-18 07:15:10,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:10,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:10,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:10,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:10,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:10,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:10,766 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:10,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:10,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:10,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:10,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:10,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:10,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 246 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665710779, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:10,780 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:10,781 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:10,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,783 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:10,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:10,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:10,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:10,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:10,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
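The ConstraintException above is raised master-side by RSGroupAdminServer.moveServers during the previous test's tearDown: the address being moved, jenkins-hbase4.apache.org:33141, matches the master's own RPC port rather than an online region server, so the move is rejected and surfaces to the test through RSGroupAdminClient.moveServers. The entries that follow show the next test method creating group `bar` and moving three region servers into it. A minimal client-side sketch of those calls, assuming an open Connection; RSGroupAdminClient is the private helper class visible in the stack trace, so treat the constructor and method signatures as an approximation rather than a stable public API:

```java
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class AddGroupAndMoveServers {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Client wrapper seen in the stack trace (RSGroupAdminClient.moveServers
      // drives the RSGroupAdminService.MoveServers RPC handled by the master).
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // 07:15:10,787: create the target group.
      rsGroupAdmin.addRSGroup("bar");

      // 07:15:10,799: move three region servers into it. The addresses must name
      // online region servers, otherwise the master rejects the request with a
      // ConstraintException like the one in the trace above.
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39465));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33769));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41293));
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}
```

Outside of tests the same two steps are usually driven from the shell with add_rsgroup and move_servers_rsgroup, which issue the same RPCs.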
2023-07-18 07:15:10,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 07:15:10,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:10,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:10,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:10,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:10,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:41293] to rsgroup bar 2023-07-18 07:15:10,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:10,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 07:15:10,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:10,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:10,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(238): Moving server region 428bd5fcdb04976e830cf8a9b852f2cd, which do not belong to RSGroup bar 2023-07-18 07:15:10,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, REOPEN/MOVE 2023-07-18 07:15:10,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-18 07:15:10,807 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, REOPEN/MOVE 2023-07-18 07:15:10,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 07:15:10,808 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 
updating hbase:meta row=428bd5fcdb04976e830cf8a9b852f2cd, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:10,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-18 07:15:10,809 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-18 07:15:10,809 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664510808"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664510808"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664510808"}]},"ts":"1689664510808"} 2023-07-18 07:15:10,810 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,41293,1689664501013, state=CLOSING 2023-07-18 07:15:10,811 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:10,813 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 07:15:10,813 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:10,813 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=76, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:10,964 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:10,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 428bd5fcdb04976e830cf8a9b852f2cd, disabling compactions & flushes 2023-07-18 07:15:10,965 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:10,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-18 07:15:10,965 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:10,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. after waiting 0 ms 2023-07-18 07:15:10,966 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 
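Moving the servers into `bar` drags along regions that are hosted on them but do not belong to the target group: the entries above show hbase:rsgroup (428bd5fcdb04976e830cf8a9b852f2cd) and hbase:meta (1588230740) each getting a TransitRegionStateProcedure in REOPEN/MOVE state, which flushes and closes the region before reopening it on a server in the destination group. Roughly the same effect can be produced by hand through the Admin API; a hedged sketch, with the encoded region name and destination ServerName copied from the log:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MoveRegionByHand {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Optional: flush first so the close only has a small memstore to write,
      // mirroring the "Flushing ... column families" entries above.
      admin.flush(TableName.valueOf("hbase:rsgroup"));

      // Ask the master to close the region and reopen it on the named server
      // (ServerName format is host,port,startcode, as printed in the log).
      admin.move(Bytes.toBytes("428bd5fcdb04976e830cf8a9b852f2cd"),
          ServerName.valueOf("jenkins-hbase4.apache.org,42375,1689664504791"));
    }
  }
}
```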
2023-07-18 07:15:10,966 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:10,966 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:10,966 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:10,966 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 428bd5fcdb04976e830cf8a9b852f2cd 1/1 column families, dataSize=6.36 KB heapSize=10.50 KB 2023-07-18 07:15:10,967 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:10,967 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:10,967 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=41.98 KB heapSize=64.98 KB 2023-07-18 07:15:11,010 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.36 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/.tmp/m/154c11a3c1aa4e1e86023ef9ab256c27 2023-07-18 07:15:11,020 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=38.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/info/18bb111a0ec346a3be9b45fdc1da6807 2023-07-18 07:15:11,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 154c11a3c1aa4e1e86023ef9ab256c27 2023-07-18 07:15:11,024 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/.tmp/m/154c11a3c1aa4e1e86023ef9ab256c27 as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m/154c11a3c1aa4e1e86023ef9ab256c27 2023-07-18 07:15:11,033 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 18bb111a0ec346a3be9b45fdc1da6807 2023-07-18 07:15:11,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 154c11a3c1aa4e1e86023ef9ab256c27 2023-07-18 07:15:11,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m/154c11a3c1aa4e1e86023ef9ab256c27, entries=9, sequenceid=26, filesize=5.5 K 2023-07-18 07:15:11,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.36 KB/6514, heapSize ~10.48 KB/10736, currentSize=0 B/0 for 
428bd5fcdb04976e830cf8a9b852f2cd in 71ms, sequenceid=26, compaction requested=false 2023-07-18 07:15:11,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-18 07:15:11,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:11,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:11,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 428bd5fcdb04976e830cf8a9b852f2cd: 2023-07-18 07:15:11,059 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 428bd5fcdb04976e830cf8a9b852f2cd move to jenkins-hbase4.apache.org,42375,1689664504791 record at close sequenceid=26 2023-07-18 07:15:11,061 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=77, ppid=75, state=RUNNABLE; CloseRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:11,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,070 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/rep_barrier/f1660cef52a34f259c48c96f1e96980a 2023-07-18 07:15:11,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1660cef52a34f259c48c96f1e96980a 2023-07-18 07:15:11,104 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=95 (bloomFilter=false), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/table/f45b01390c65465d827ef31a468e69f2 2023-07-18 07:15:11,112 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f45b01390c65465d827ef31a468e69f2 2023-07-18 07:15:11,113 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/info/18bb111a0ec346a3be9b45fdc1da6807 as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info/18bb111a0ec346a3be9b45fdc1da6807 2023-07-18 07:15:11,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 18bb111a0ec346a3be9b45fdc1da6807 2023-07-18 07:15:11,121 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info/18bb111a0ec346a3be9b45fdc1da6807, entries=46, sequenceid=95, filesize=10.2 K 2023-07-18 07:15:11,122 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/rep_barrier/f1660cef52a34f259c48c96f1e96980a as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier/f1660cef52a34f259c48c96f1e96980a 2023-07-18 07:15:11,130 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1660cef52a34f259c48c96f1e96980a 2023-07-18 07:15:11,130 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier/f1660cef52a34f259c48c96f1e96980a, entries=10, sequenceid=95, filesize=6.1 K 2023-07-18 07:15:11,131 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/table/f45b01390c65465d827ef31a468e69f2 as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table/f45b01390c65465d827ef31a468e69f2 2023-07-18 07:15:11,138 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f45b01390c65465d827ef31a468e69f2 2023-07-18 07:15:11,139 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table/f45b01390c65465d827ef31a468e69f2, entries=15, sequenceid=95, filesize=6.2 K 2023-07-18 07:15:11,140 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~41.98 KB/42987, heapSize ~64.94 KB/66496, currentSize=0 B/0 for 1588230740 in 173ms, sequenceid=95, compaction requested=false 2023-07-18 07:15:11,150 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/recovered.edits/98.seqid, newMaxSeqId=98, maxSeqId=1 2023-07-18 07:15:11,151 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:11,152 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:11,152 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:11,152 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,42375,1689664504791 record at close sequenceid=95 2023-07-18 07:15:11,155 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-18 07:15:11,155 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in 
hbase:meta; skipping -- ServerName required 2023-07-18 07:15:11,157 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=76 2023-07-18 07:15:11,157 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=76, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,41293,1689664501013 in 342 msec 2023-07-18 07:15:11,158 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=76, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:11,308 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42375,1689664504791, state=OPENING 2023-07-18 07:15:11,310 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 07:15:11,312 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=76, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:11,312 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:11,468 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 07:15:11,468 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:11,471 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42375%2C1689664504791.meta, suffix=.meta, logDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,42375,1689664504791, archiveDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs, maxLogs=32 2023-07-18 07:15:11,490 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK] 2023-07-18 07:15:11,491 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK] 2023-07-18 07:15:11,494 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK] 2023-07-18 07:15:11,499 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/WALs/jenkins-hbase4.apache.org,42375,1689664504791/jenkins-hbase4.apache.org%2C42375%2C1689664504791.meta.1689664511472.meta 2023-07-18 07:15:11,499 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35973,DS-fba91313-98da-49e7-aca0-d80120d5cf8c,DISK], DatanodeInfoWithStorage[127.0.0.1:35935,DS-8a96df73-714a-4f6f-97cc-cc27b08692c2,DISK], DatanodeInfoWithStorage[127.0.0.1:44391,DS-5c4b7b43-3300-44cc-aa7f-e40b05091082,DISK]] 2023-07-18 07:15:11,499 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:11,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:11,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 07:15:11,500 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-18 07:15:11,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 07:15:11,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:11,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 07:15:11,500 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 07:15:11,502 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:11,503 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info 2023-07-18 07:15:11,503 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info 2023-07-18 07:15:11,504 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:11,514 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 18bb111a0ec346a3be9b45fdc1da6807 2023-07-18 07:15:11,514 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info/18bb111a0ec346a3be9b45fdc1da6807 2023-07-18 07:15:11,514 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:11,515 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:11,516 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:11,516 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:11,516 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:11,526 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1660cef52a34f259c48c96f1e96980a 2023-07-18 07:15:11,526 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier/f1660cef52a34f259c48c96f1e96980a 2023-07-18 07:15:11,526 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:11,526 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:11,527 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table 2023-07-18 07:15:11,528 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table 2023-07-18 07:15:11,528 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:11,537 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f45b01390c65465d827ef31a468e69f2 2023-07-18 07:15:11,537 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table/f45b01390c65465d827ef31a468e69f2 2023-07-18 07:15:11,537 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:11,538 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740 2023-07-18 07:15:11,539 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740 2023-07-18 07:15:11,542 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
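The FlushLargeStoresPolicy entry just above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta table descriptor, so the per-family flush bound falls back to the region's memstore flush size divided by the number of families (42.7 MB here). For a user table that bound can be supplied as a descriptor value; a sketch under the assumption that a table named some_table already exists (the table name and the 16 MB value are placeholders):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SetPerFamilyFlushBound {
  public static void main(String[] args) throws Exception {
    TableName tn = TableName.valueOf("some_table"); // placeholder table
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor current = admin.getDescriptor(tn);
      TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
          // Property name quoted by FlushLargeStoresPolicy above; 16 MB is only an example.
          .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
              String.valueOf(16L * 1024 * 1024))
          .build();
      admin.modifyTable(updated);
    }
  }
}
```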
2023-07-18 07:15:11,544 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:11,545 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=99; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11514485760, jitterRate=0.07237005233764648}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:11,545 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:11,546 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=79, masterSystemTime=1689664511464 2023-07-18 07:15:11,548 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 07:15:11,548 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 07:15:11,548 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42375,1689664504791, state=OPEN 2023-07-18 07:15:11,550 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 07:15:11,550 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:11,550 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=428bd5fcdb04976e830cf8a9b852f2cd, regionState=CLOSED 2023-07-18 07:15:11,551 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664511550"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664511550"}]},"ts":"1689664511550"} 2023-07-18 07:15:11,551 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41293] ipc.CallRunner(144): callId: 185 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:58556 deadline: 1689664571551, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42375 startCode=1689664504791. As of locationSeqNum=95. 
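With hbase:meta now open on jenkins-hbase4.apache.org,42375, requests that still target the old location on port 41293 are answered with RegionMovedException carrying the new host, port and locationSeqNum, and the caller is expected to refresh its cached location and retry. Application code can force the same refresh through RegionLocator; a small sketch, assuming the standard 2.x client API:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RefreshRegionLocation {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      // reload=true bypasses the cached location, which is effectively what the
      // retrying caller does after a RegionMovedException like the one above.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
      System.out.println("Region " + loc.getRegion().getEncodedName()
          + " now on " + loc.getServerName());
    }
  }
}
```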
2023-07-18 07:15:11,552 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=76 2023-07-18 07:15:11,552 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=76, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42375,1689664504791 in 240 msec 2023-07-18 07:15:11,553 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 745 msec 2023-07-18 07:15:11,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-18 07:15:11,657 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; CloseRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,41293,1689664501013 in 844 msec 2023-07-18 07:15:11,658 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:11,808 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=428bd5fcdb04976e830cf8a9b852f2cd, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:11,808 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664511808"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664511808"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664511808"}]},"ts":"1689664511808"} 2023-07-18 07:15:11,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-18 07:15:11,811 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=75, state=RUNNABLE; OpenRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:11,968 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:11,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 428bd5fcdb04976e830cf8a9b852f2cd, NAME => 'hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:11,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:11,968 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. service=MultiRowMutationService 2023-07-18 07:15:11,968 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
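The reopened hbase:rsgroup region again loads MultiRowMutationEndpoint "from HTD", meaning the coprocessor is declared in the table descriptor rather than in site configuration. Declaring a table-level coprocessor that way looks roughly like the following (example_table is a placeholder, and the endpoint class is simply an example of a coprocessor bundled with HBase):

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class TableCoprocessorExample {
  public static void main(String[] args) throws Exception {
    // Same mechanism the log shows for hbase:rsgroup: the coprocessor class name is
    // stored in the table descriptor and loaded when each region of the table opens.
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("example_table"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
    System.out.println(td);
  }
}
```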
2023-07-18 07:15:11,969 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,969 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:11,969 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,969 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,971 INFO [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,972 DEBUG [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m 2023-07-18 07:15:11,972 DEBUG [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m 2023-07-18 07:15:11,972 INFO [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 428bd5fcdb04976e830cf8a9b852f2cd columnFamilyName m 2023-07-18 07:15:11,984 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 154c11a3c1aa4e1e86023ef9ab256c27 2023-07-18 07:15:11,985 DEBUG [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] regionserver.HStore(539): loaded hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m/154c11a3c1aa4e1e86023ef9ab256c27 2023-07-18 07:15:11,986 INFO [StoreOpener-428bd5fcdb04976e830cf8a9b852f2cd-1] regionserver.HStore(310): Store=428bd5fcdb04976e830cf8a9b852f2cd/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:11,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,993 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:11,994 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 428bd5fcdb04976e830cf8a9b852f2cd; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@496ac87, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:11,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 428bd5fcdb04976e830cf8a9b852f2cd: 2023-07-18 07:15:11,994 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd., pid=80, masterSystemTime=1689664511963 2023-07-18 07:15:11,996 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:11,996 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:11,997 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=428bd5fcdb04976e830cf8a9b852f2cd, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:11,997 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664511997"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664511997"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664511997"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664511997"}]},"ts":"1689664511997"} 2023-07-18 07:15:12,004 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=75 2023-07-18 07:15:12,004 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=75, state=SUCCESS; OpenRegionProcedure 428bd5fcdb04976e830cf8a9b852f2cd, server=jenkins-hbase4.apache.org,42375,1689664504791 in 187 msec 2023-07-18 07:15:12,005 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=428bd5fcdb04976e830cf8a9b852f2cd, REOPEN/MOVE in 1.1990 sec 2023-07-18 07:15:12,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221, jenkins-hbase4.apache.org,41293,1689664501013] are moved back to default 2023-07-18 07:15:12,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-18 07:15:12,809 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:12,811 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41293] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:58596 deadline: 1689664572810, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42375 startCode=1689664504791. As of locationSeqNum=26. 2023-07-18 07:15:12,914 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41293] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:58596 deadline: 1689664572913, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42375 startCode=1689664504791. As of locationSeqNum=95. 2023-07-18 07:15:13,016 DEBUG [hconnection-0x320932da-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:13,025 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33586, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:13,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:13,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:13,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 07:15:13,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:13,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:13,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:13,046 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:13,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-18 07:15:13,046 
DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41293] ipc.CallRunner(144): callId: 190 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:58556 deadline: 1689664573046, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42375 startCode=1689664504791. As of locationSeqNum=26. 2023-07-18 07:15:13,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 07:15:13,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 07:15:13,152 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:13,152 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 07:15:13,153 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:13,153 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:13,157 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:13,159 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,160 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e empty. 
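The master is now running CreateTableProcedure pid=81 for Group_testFailRemoveGroup using the descriptor logged at 07:15:13,043: a single family f, REGION_REPLICATION 1, everything else at its default. A sketch of the equivalent client-side create request with the 2.x descriptor builders, assuming an open Connection:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateGroupTestTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // createTable blocks until the create procedure completes; the repeated
      // "Checking to see if procedure is done pid=81" lines are the client polling
      // the master for that completion.
      admin.createTable(td);
    }
  }
}
```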
2023-07-18 07:15:13,160 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,160 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 07:15:13,181 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:13,183 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ad115f12c70bcead9c9b2f13233c123e, NAME => 'Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:13,201 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:13,201 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing ad115f12c70bcead9c9b2f13233c123e, disabling compactions & flushes 2023-07-18 07:15:13,201 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,202 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,202 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. after waiting 0 ms 2023-07-18 07:15:13,202 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,202 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 
2023-07-18 07:15:13,202 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:13,205 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:13,207 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664513207"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664513207"}]},"ts":"1689664513207"} 2023-07-18 07:15:13,209 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:13,210 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:13,210 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664513210"}]},"ts":"1689664513210"} 2023-07-18 07:15:13,212 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-18 07:15:13,222 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, ASSIGN}] 2023-07-18 07:15:13,224 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, ASSIGN 2023-07-18 07:15:13,225 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:13,350 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 07:15:13,377 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:13,377 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664513377"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664513377"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664513377"}]},"ts":"1689664513377"} 2023-07-18 07:15:13,379 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 
07:15:13,535 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ad115f12c70bcead9c9b2f13233c123e, NAME => 'Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:13,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:13,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,535 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,537 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,539 DEBUG [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f 2023-07-18 07:15:13,539 DEBUG [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f 2023-07-18 07:15:13,539 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ad115f12c70bcead9c9b2f13233c123e columnFamilyName f 2023-07-18 07:15:13,540 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] regionserver.HStore(310): Store=ad115f12c70bcead9c9b2f13233c123e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:13,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,541 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,547 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:13,548 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ad115f12c70bcead9c9b2f13233c123e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10361641760, jitterRate=-0.03499691188335419}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:13,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:13,549 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e., pid=83, masterSystemTime=1689664513531 2023-07-18 07:15:13,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,550 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 
2023-07-18 07:15:13,551 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:13,551 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664513551"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664513551"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664513551"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664513551"}]},"ts":"1689664513551"} 2023-07-18 07:15:13,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-18 07:15:13,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791 in 174 msec 2023-07-18 07:15:13,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-18 07:15:13,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, ASSIGN in 333 msec 2023-07-18 07:15:13,559 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:13,559 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664513559"}]},"ts":"1689664513559"} 2023-07-18 07:15:13,565 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-18 07:15:13,568 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:13,570 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 525 msec 2023-07-18 07:15:13,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-18 07:15:13,652 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-18 07:15:13,652 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-18 07:15:13,652 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:13,653 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41293] ipc.CallRunner(144): callId: 275 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:58582 deadline: 1689664573653, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=42375 startCode=1689664504791. As of locationSeqNum=95. 2023-07-18 07:15:13,756 DEBUG [hconnection-0xd65c55c-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:13,759 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33598, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:13,767 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-18 07:15:13,767 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:13,767 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-18 07:15:13,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-18 07:15:13,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:13,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 07:15:13,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:13,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:13,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-18 07:15:13,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region ad115f12c70bcead9c9b2f13233c123e to RSGroup bar 2023-07-18 07:15:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 07:15:13,780 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:13,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE 2023-07-18 07:15:13,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-18 07:15:13,783 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE 2023-07-18 07:15:13,784 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:13,784 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664513784"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664513784"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664513784"}]},"ts":"1689664513784"} 2023-07-18 07:15:13,786 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:13,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ad115f12c70bcead9c9b2f13233c123e, disabling compactions & flushes 2023-07-18 07:15:13,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. after waiting 0 ms 2023-07-18 07:15:13,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:13,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:13,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 
2023-07-18 07:15:13,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:13,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ad115f12c70bcead9c9b2f13233c123e move to jenkins-hbase4.apache.org,41293,1689664501013 record at close sequenceid=2 2023-07-18 07:15:13,948 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:13,948 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=CLOSED 2023-07-18 07:15:13,949 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664513948"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664513948"}]},"ts":"1689664513948"} 2023-07-18 07:15:13,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-18 07:15:13,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791 in 164 msec 2023-07-18 07:15:13,953 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:14,104 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 07:15:14,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:14,104 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664514104"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664514104"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664514104"}]},"ts":"1689664514104"} 2023-07-18 07:15:14,107 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:14,132 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 07:15:14,267 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 
2023-07-18 07:15:14,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ad115f12c70bcead9c9b2f13233c123e, NAME => 'Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:14,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:14,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,267 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,269 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,270 DEBUG [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f 2023-07-18 07:15:14,270 DEBUG [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f 2023-07-18 07:15:14,271 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ad115f12c70bcead9c9b2f13233c123e columnFamilyName f 2023-07-18 07:15:14,271 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] regionserver.HStore(310): Store=ad115f12c70bcead9c9b2f13233c123e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:14,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,274 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,279 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ad115f12c70bcead9c9b2f13233c123e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11157479360, jitterRate=0.03912124037742615}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:14,279 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:14,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e., pid=86, masterSystemTime=1689664514262 2023-07-18 07:15:14,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:14,282 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:14,283 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:14,283 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664514282"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664514282"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664514282"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664514282"}]},"ts":"1689664514282"} 2023-07-18 07:15:14,287 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-18 07:15:14,287 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,41293,1689664501013 in 179 msec 2023-07-18 07:15:14,288 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE in 507 msec 2023-07-18 07:15:14,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-18 07:15:14,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
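Up to this point the log shows the table Group_testFailRemoveGroup being created (CreateTableProcedure pid=81), assigned, and then moved out of the default rsgroup into group "bar" via RSGroupAdminService.MoveTables, which drives the REOPEN/MOVE seen as pid=84. A client would issue roughly the following calls; this is a minimal sketch against the branch-2.4 rsgroup client API, and the Connection variable conn, the class name, and the single column family "f" are assumptions taken from the test context rather than from this log.

import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToBar {
  // Creates the test table with one family "f" and moves it into rsgroup "bar".
  static void run(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      // Corresponds to CreateTableProcedure pid=81 in the log above.
      admin.createTable(TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build());
    }
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Triggers the REOPEN/MOVE (pid=84): close on the old server, reopen on a server in "bar".
    rsGroupAdmin.moveTables(Collections.singleton(table), "bar");
  }
}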
2023-07-18 07:15:14,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:14,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:14,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:14,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-18 07:15:14,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:14,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 07:15:14,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:14,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:52448 deadline: 1689665714793, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-18 07:15:14,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:41293] to rsgroup default 2023-07-18 07:15:14,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:14,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:52448 deadline: 1689665714795, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-18 07:15:14,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-18 07:15:14,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:14,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 07:15:14,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:14,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:14,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-18 07:15:14,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region ad115f12c70bcead9c9b2f13233c123e to RSGroup default 2023-07-18 07:15:14,805 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE 2023-07-18 07:15:14,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 07:15:14,806 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE 2023-07-18 07:15:14,807 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:14,807 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664514807"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664514807"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664514807"}]},"ts":"1689664514807"} 2023-07-18 07:15:14,811 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:14,971 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ad115f12c70bcead9c9b2f13233c123e, disabling compactions & flushes 2023-07-18 07:15:14,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:14,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:14,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. after waiting 0 ms 2023-07-18 07:15:14,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:14,978 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:14,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 
2023-07-18 07:15:14,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:14,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ad115f12c70bcead9c9b2f13233c123e move to jenkins-hbase4.apache.org,42375,1689664504791 record at close sequenceid=5 2023-07-18 07:15:14,982 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:14,982 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=CLOSED 2023-07-18 07:15:14,982 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664514982"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664514982"}]},"ts":"1689664514982"} 2023-07-18 07:15:14,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-18 07:15:14,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,41293,1689664501013 in 176 msec 2023-07-18 07:15:14,986 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:15,136 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:15,137 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664515136"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664515136"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664515136"}]},"ts":"1689664515136"} 2023-07-18 07:15:15,139 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:15,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 
2023-07-18 07:15:15,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ad115f12c70bcead9c9b2f13233c123e, NAME => 'Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:15,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:15,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,297 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,299 DEBUG [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f 2023-07-18 07:15:15,299 DEBUG [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f 2023-07-18 07:15:15,299 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ad115f12c70bcead9c9b2f13233c123e columnFamilyName f 2023-07-18 07:15:15,300 INFO [StoreOpener-ad115f12c70bcead9c9b2f13233c123e-1] regionserver.HStore(310): Store=ad115f12c70bcead9c9b2f13233c123e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:15,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,302 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:15,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ad115f12c70bcead9c9b2f13233c123e; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11110717600, jitterRate=0.03476621210575104}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:15,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:15,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e., pid=89, masterSystemTime=1689664515291 2023-07-18 07:15:15,309 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:15,309 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:15,309 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:15,310 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664515309"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664515309"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664515309"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664515309"}]},"ts":"1689664515309"} 2023-07-18 07:15:15,313 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-18 07:15:15,313 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791 in 172 msec 2023-07-18 07:15:15,315 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, REOPEN/MOVE in 509 msec 2023-07-18 07:15:15,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-18 07:15:15,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
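The two ConstraintExceptions above are the behavior this test is named for: while "bar" still owns a table, the master refuses both to remove the group and to move its servers away, so the table has to be moved back to the default group first (the second REOPEN/MOVE, pid=87). Sketched client-side under the same assumptions as the previous snippet; since ConstraintException is a DoNotRetryIOException that is re-thrown on the client, catching IOException is the conservative choice here.

import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class FailWhileBarOwnsTable {
  static void run(RSGroupAdminClient rsGroupAdmin) throws IOException {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try {
      rsGroupAdmin.removeRSGroup("bar");  // refused: "RSGroup bar has 1 tables; ..."
      throw new AssertionError("removeRSGroup should have been rejected while the table is in bar");
    } catch (IOException expected) {
      // server-side ConstraintException surfaced to the client
    }
    try {
      // refused: "Cannot leave a RSGroup bar that contains tables without servers to host them."
      rsGroupAdmin.moveServers(rsGroupAdmin.getRSGroupInfo("bar").getServers(), "default");
      throw new AssertionError("moveServers should have been rejected while the table is in bar");
    } catch (IOException expected) {
      // expected while the table is still in "bar"
    }
    // Moving the table back to the default group (pid=87 in the log) clears the first constraint.
    rsGroupAdmin.moveTables(Collections.singleton(table), "default");
  }
}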
2023-07-18 07:15:15,806 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:15,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:15,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:15,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 07:15:15,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:15,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:52448 deadline: 1689665715813, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
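Even with the table back in the default group, removing "bar" is still rejected, this time because the group is not empty of servers (the "RSGroup bar has 3 servers" ConstraintException above). A short sketch of that second negative check, same assumptions as before:

import java.io.IOException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class FailWhileBarOwnsServers {
  static void run(RSGroupAdminClient rsGroupAdmin) throws IOException {
    try {
      rsGroupAdmin.removeRSGroup("bar");  // refused: the group still holds 3 servers
      throw new AssertionError("removeRSGroup should have been rejected while servers remain in bar");
    } catch (IOException expected) {
      // "RSGroup bar has 3 servers; you must remove these servers ... the RSGroup can be removed."
    }
  }
}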
2023-07-18 07:15:15,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:41293] to rsgroup default 2023-07-18 07:15:15,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:15,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-18 07:15:15,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:15,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:15,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-18 07:15:15,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221, jenkins-hbase4.apache.org,41293,1689664501013] are moved back to bar 2023-07-18 07:15:15,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-18 07:15:15,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:15,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:15,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:15,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-18 07:15:15,829 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:15,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:15,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:15,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:15,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:15,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:15,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:15,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:15,839 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-18 07:15:15,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-18 07:15:15,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:15,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 07:15:15,843 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664515843"}]},"ts":"1689664515843"} 2023-07-18 07:15:15,845 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-18 07:15:15,847 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-18 07:15:15,847 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, UNASSIGN}] 2023-07-18 07:15:15,849 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, UNASSIGN 2023-07-18 07:15:15,850 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:15,850 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664515850"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664515850"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664515850"}]},"ts":"1689664515850"} 2023-07-18 07:15:15,852 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:15,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 07:15:16,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:16,005 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ad115f12c70bcead9c9b2f13233c123e, disabling compactions & flushes 2023-07-18 07:15:16,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:16,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:16,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. after waiting 0 ms 2023-07-18 07:15:16,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:16,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 07:15:16,011 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e. 2023-07-18 07:15:16,011 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ad115f12c70bcead9c9b2f13233c123e: 2023-07-18 07:15:16,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:16,015 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=ad115f12c70bcead9c9b2f13233c123e, regionState=CLOSED 2023-07-18 07:15:16,015 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689664516015"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664516015"}]},"ts":"1689664516015"} 2023-07-18 07:15:16,022 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-18 07:15:16,022 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure ad115f12c70bcead9c9b2f13233c123e, server=jenkins-hbase4.apache.org,42375,1689664504791 in 165 msec 2023-07-18 07:15:16,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-18 07:15:16,026 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=ad115f12c70bcead9c9b2f13233c123e, UNASSIGN in 175 msec 2023-07-18 07:15:16,027 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664516027"}]},"ts":"1689664516027"} 2023-07-18 07:15:16,028 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-18 07:15:16,031 INFO [PEWorker-5] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-18 07:15:16,045 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 204 msec 2023-07-18 07:15:16,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-18 07:15:16,146 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-18 07:15:16,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-18 07:15:16,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:16,153 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:16,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-18 07:15:16,156 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:16,157 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:16,159 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:16,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 07:15:16,164 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:16,166 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits] 2023-07-18 07:15:16,172 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits/10.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e/recovered.edits/10.seqid 2023-07-18 07:15:16,173 DEBUG [HFileArchiver-6] 
backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testFailRemoveGroup/ad115f12c70bcead9c9b2f13233c123e 2023-07-18 07:15:16,173 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-18 07:15:16,176 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:16,179 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-18 07:15:16,181 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-18 07:15:16,182 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:16,182 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-18 07:15:16,182 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664516182"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:16,184 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 07:15:16,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ad115f12c70bcead9c9b2f13233c123e, NAME => 'Group_testFailRemoveGroup,,1689664513042.ad115f12c70bcead9c9b2f13233c123e.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 07:15:16,184 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-18 07:15:16,184 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664516184"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:16,185 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-18 07:15:16,187 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-18 07:15:16,188 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 40 msec 2023-07-18 07:15:16,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-18 07:15:16,264 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-18 07:15:16,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:16,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
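The DISABLE (pid=90) and DELETE (pid=93) operations recorded above are the standard HBase Admin teardown sequence driven from the client side. A minimal sketch of that sequence, assuming only a stock branch-2.4 client classpath (the table name is taken from the log; the class name and everything else here is illustrative, not the test's actual code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropGroupTestTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
      if (admin.tableExists(tn)) {
        if (admin.isTableEnabled(tn)) {
          admin.disableTable(tn);  // submits a DisableTableProcedure, as with pid=90 above
        }
        admin.deleteTable(tn);     // submits a DeleteTableProcedure, as with pid=93 above:
                                   // region dirs are archived and rows removed from hbase:meta
      }
    }
  }
}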
2023-07-18 07:15:16,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:16,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:16,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:16,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:16,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:16,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:16,284 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:16,285 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:16,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:16,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:16,292 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:16,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:16,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:16,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665716298, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:16,299 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:16,301 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:16,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,302 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:16,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:16,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:16,322 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512 (was 497) Potentially hanging thread: hconnection-0x320932da-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data1/current 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:36858 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-577766439_17 at /127.0.0.1:41740 [Waiting for operation #8] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1810549220-172.31.14.131-1689664494510:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:36864 [Receiving block BP-1810549220-172.31.14.131-1689664494510:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1810549220-172.31.14.131-1689664494510:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:41776 [Receiving block BP-1810549220-172.31.14.131-1689664494510:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:39804 [Receiving block BP-1810549220-172.31.14.131-1689664494510:blk_1073741858_1034] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9-prefix:jenkins-hbase4.apache.org,42375,1689664504791.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd65c55c-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1810549220-172.31.14.131-1689664494510:blk_1073741858_1034, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-577766439_17 at /127.0.0.1:39818 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=788 (was 774) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=490 (was 524), ProcessCount=174 (was 174), AvailableMemoryMB=2867 (was 3173) 2023-07-18 07:15:16,323 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 07:15:16,341 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512, OpenFileDescriptor=788, MaxFileDescriptor=60000, SystemLoadAverage=490, ProcessCount=174, AvailableMemoryMB=2865 2023-07-18 07:15:16,341 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-18 07:15:16,341 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-18 07:15:16,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:16,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:16,347 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:16,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:16,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:16,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:16,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:16,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:16,361 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:16,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:16,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,370 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:16,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:16,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:16,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:16,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:16,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665716382, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:16,383 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
...
1 more 2023-07-18 07:15:16,387 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:16,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,388 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,388 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:16,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:16,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:16,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:16,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:16,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_824962729 2023-07-18 07:15:16,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:16,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:16,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:16,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:16,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,404 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33769] to rsgroup Group_testMultiTableMove_824962729 2023-07-18 07:15:16,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:16,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:16,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:16,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 07:15:16,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155] are moved back to default 2023-07-18 07:15:16,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_824962729 2023-07-18 07:15:16,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:16,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:16,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:16,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_824962729 2023-07-18 07:15:16,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:16,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:16,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:16,420 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:16,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-18 07:15:16,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 07:15:16,422 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:16,423 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:16,423 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:16,424 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:16,430 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:16,432 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:16,433 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 empty. 2023-07-18 07:15:16,433 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:16,433 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 07:15:16,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 07:15:16,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 07:15:16,858 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:16,861 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 196f48d1c2221e7e21a740e59c872c51, NAME => 'GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:16,893 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated 
GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:16,894 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 196f48d1c2221e7e21a740e59c872c51, disabling compactions & flushes 2023-07-18 07:15:16,894 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:16,894 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:16,894 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. after waiting 0 ms 2023-07-18 07:15:16,894 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:16,894 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:16,894 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 196f48d1c2221e7e21a740e59c872c51: 2023-07-18 07:15:16,897 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:16,899 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664516898"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664516898"}]},"ts":"1689664516898"} 2023-07-18 07:15:16,901 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 07:15:16,902 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:16,902 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664516902"}]},"ts":"1689664516902"} 2023-07-18 07:15:16,904 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-18 07:15:16,908 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:16,908 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:16,908 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:16,908 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:16,908 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:16,908 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, ASSIGN}] 2023-07-18 07:15:16,910 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, ASSIGN 2023-07-18 07:15:16,915 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:16,945 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 07:15:16,946 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 07:15:17,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 07:15:17,065 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 07:15:17,067 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:17,067 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664517066"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664517066"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664517066"}]},"ts":"1689664517066"} 2023-07-18 07:15:17,069 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:17,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:17,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 196f48d1c2221e7e21a740e59c872c51, NAME => 'GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:17,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:17,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,229 INFO [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,230 DEBUG [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/f 2023-07-18 07:15:17,231 DEBUG [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/f 2023-07-18 07:15:17,231 INFO [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 196f48d1c2221e7e21a740e59c872c51 columnFamilyName f 2023-07-18 07:15:17,232 INFO [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] regionserver.HStore(310): Store=196f48d1c2221e7e21a740e59c872c51/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:17,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,237 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:17,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:17,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 196f48d1c2221e7e21a740e59c872c51; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10618961120, jitterRate=-0.01103217899799347}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:17,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 196f48d1c2221e7e21a740e59c872c51: 2023-07-18 07:15:17,243 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51., pid=96, masterSystemTime=1689664517220 2023-07-18 07:15:17,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:17,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 
2023-07-18 07:15:17,245 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:17,246 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664517245"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664517245"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664517245"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664517245"}]},"ts":"1689664517245"} 2023-07-18 07:15:17,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-18 07:15:17,249 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,39465,1689664501221 in 178 msec 2023-07-18 07:15:17,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-18 07:15:17,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, ASSIGN in 341 msec 2023-07-18 07:15:17,252 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:17,252 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664517252"}]},"ts":"1689664517252"} 2023-07-18 07:15:17,255 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-18 07:15:17,257 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:17,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 840 msec 2023-07-18 07:15:17,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-18 07:15:17,526 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-18 07:15:17,527 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-18 07:15:17,527 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:17,531 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-18 07:15:17,531 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:17,531 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-18 07:15:17,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:17,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:17,536 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:17,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-18 07:15:17,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 07:15:17,538 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:17,539 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:17,539 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:17,540 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:17,542 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:17,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 07:15:17,703 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:17,705 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b empty. 
2023-07-18 07:15:17,705 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:17,705 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 07:15:17,725 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:17,727 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => ba078268ca43ed8d8506ec5f3f00bb7b, NAME => 'GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:17,750 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:17,750 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing ba078268ca43ed8d8506ec5f3f00bb7b, disabling compactions & flushes 2023-07-18 07:15:17,750 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:17,750 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:17,750 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. after waiting 0 ms 2023-07-18 07:15:17,750 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:17,750 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:17,750 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for ba078268ca43ed8d8506ec5f3f00bb7b: 2023-07-18 07:15:17,753 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:17,755 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664517755"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664517755"}]},"ts":"1689664517755"} 2023-07-18 07:15:17,756 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:17,757 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:17,758 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664517757"}]},"ts":"1689664517757"} 2023-07-18 07:15:17,759 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-18 07:15:17,762 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:17,762 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:17,762 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:17,762 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:17,762 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:17,763 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, ASSIGN}] 2023-07-18 07:15:17,765 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, ASSIGN 2023-07-18 07:15:17,766 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39465,1689664501221; forceNewPlan=false, retain=false 2023-07-18 07:15:17,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 07:15:17,916 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-18 07:15:17,918 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:17,918 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664517918"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664517918"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664517918"}]},"ts":"1689664517918"} 2023-07-18 07:15:17,920 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:18,076 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:18,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba078268ca43ed8d8506ec5f3f00bb7b, NAME => 'GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:18,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:18,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,077 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,080 INFO [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,082 DEBUG [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/f 2023-07-18 07:15:18,082 DEBUG [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/f 2023-07-18 07:15:18,083 INFO [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba078268ca43ed8d8506ec5f3f00bb7b columnFamilyName f 2023-07-18 07:15:18,083 INFO [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] regionserver.HStore(310): Store=ba078268ca43ed8d8506ec5f3f00bb7b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:18,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:18,090 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba078268ca43ed8d8506ec5f3f00bb7b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10052373760, jitterRate=-0.06379973888397217}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:18,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba078268ca43ed8d8506ec5f3f00bb7b: 2023-07-18 07:15:18,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b., pid=99, masterSystemTime=1689664518071 2023-07-18 07:15:18,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:18,093 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:18,093 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:18,094 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518093"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664518093"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664518093"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664518093"}]},"ts":"1689664518093"} 2023-07-18 07:15:18,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-18 07:15:18,096 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,39465,1689664501221 in 175 msec 2023-07-18 07:15:18,098 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-18 07:15:18,098 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, ASSIGN in 333 msec 2023-07-18 07:15:18,099 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:18,099 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664518099"}]},"ts":"1689664518099"} 2023-07-18 07:15:18,100 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-18 07:15:18,105 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:18,107 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 572 msec 2023-07-18 07:15:18,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-18 07:15:18,205 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-18 07:15:18,205 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-18 07:15:18,206 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:18,210 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-18 07:15:18,211 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:18,211 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-18 07:15:18,212 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:18,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 07:15:18,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:18,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 07:15:18,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:18,229 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_824962729 2023-07-18 07:15:18,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_824962729 2023-07-18 07:15:18,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:18,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:18,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:18,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:18,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_824962729 2023-07-18 07:15:18,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region ba078268ca43ed8d8506ec5f3f00bb7b to RSGroup Group_testMultiTableMove_824962729 2023-07-18 07:15:18,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, REOPEN/MOVE 2023-07-18 07:15:18,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_824962729 2023-07-18 07:15:18,245 INFO [PEWorker-2] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, REOPEN/MOVE 2023-07-18 07:15:18,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 196f48d1c2221e7e21a740e59c872c51 to RSGroup Group_testMultiTableMove_824962729 2023-07-18 07:15:18,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, REOPEN/MOVE 2023-07-18 07:15:18,247 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:18,247 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518247"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664518247"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664518247"}]},"ts":"1689664518247"} 2023-07-18 07:15:18,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_824962729, current retry=0 2023-07-18 07:15:18,249 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, REOPEN/MOVE 2023-07-18 07:15:18,250 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:18,250 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518250"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664518250"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664518250"}]},"ts":"1689664518250"} 2023-07-18 07:15:18,250 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:18,253 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,39465,1689664501221}] 2023-07-18 07:15:18,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 196f48d1c2221e7e21a740e59c872c51, disabling compactions & flushes 2023-07-18 07:15:18,406 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 
2023-07-18 07:15:18,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:18,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. after waiting 0 ms 2023-07-18 07:15:18,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:18,411 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:18,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:18,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 196f48d1c2221e7e21a740e59c872c51: 2023-07-18 07:15:18,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 196f48d1c2221e7e21a740e59c872c51 move to jenkins-hbase4.apache.org,33769,1689664501155 record at close sequenceid=2 2023-07-18 07:15:18,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,413 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba078268ca43ed8d8506ec5f3f00bb7b, disabling compactions & flushes 2023-07-18 07:15:18,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:18,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:18,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. after waiting 0 ms 2023-07-18 07:15:18,415 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:18,419 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=CLOSED 2023-07-18 07:15:18,419 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518419"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664518419"}]},"ts":"1689664518419"} 2023-07-18 07:15:18,422 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-18 07:15:18,423 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,39465,1689664501221 in 167 msec 2023-07-18 07:15:18,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:18,423 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:18,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:18,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba078268ca43ed8d8506ec5f3f00bb7b: 2023-07-18 07:15:18,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ba078268ca43ed8d8506ec5f3f00bb7b move to jenkins-hbase4.apache.org,33769,1689664501155 record at close sequenceid=2 2023-07-18 07:15:18,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,427 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=CLOSED 2023-07-18 07:15:18,427 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518426"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664518426"}]},"ts":"1689664518426"} 2023-07-18 07:15:18,432 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-18 07:15:18,432 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,39465,1689664501221 in 178 msec 2023-07-18 07:15:18,433 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:18,574 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:18,574 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:18,574 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518574"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664518574"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664518574"}]},"ts":"1689664518574"} 2023-07-18 07:15:18,574 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518574"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664518574"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664518574"}]},"ts":"1689664518574"} 2023-07-18 07:15:18,576 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:18,579 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, 
state=RUNNABLE; OpenRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:18,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:18,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 196f48d1c2221e7e21a740e59c872c51, NAME => 'GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:18,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:18,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,734 INFO [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,735 DEBUG [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/f 2023-07-18 07:15:18,736 DEBUG [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/f 2023-07-18 07:15:18,736 INFO [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 196f48d1c2221e7e21a740e59c872c51 columnFamilyName f 2023-07-18 07:15:18,737 INFO [StoreOpener-196f48d1c2221e7e21a740e59c872c51-1] regionserver.HStore(310): Store=196f48d1c2221e7e21a740e59c872c51/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:18,737 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,743 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:18,744 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 196f48d1c2221e7e21a740e59c872c51; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11735212480, jitterRate=0.09292683005332947}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:18,744 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 196f48d1c2221e7e21a740e59c872c51: 2023-07-18 07:15:18,745 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51., pid=105, masterSystemTime=1689664518728 2023-07-18 07:15:18,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:18,748 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:18,748 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:18,748 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:18,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ba078268ca43ed8d8506ec5f3f00bb7b, NAME => 'GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:18,748 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518748"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664518748"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664518748"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664518748"}]},"ts":"1689664518748"} 2023-07-18 07:15:18,748 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:18,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,751 INFO [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,754 DEBUG [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/f 2023-07-18 07:15:18,754 DEBUG [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/f 2023-07-18 07:15:18,754 INFO [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ba078268ca43ed8d8506ec5f3f00bb7b columnFamilyName f 2023-07-18 07:15:18,755 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-18 07:15:18,755 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,33769,1689664501155 in 174 msec 2023-07-18 07:15:18,756 INFO [StoreOpener-ba078268ca43ed8d8506ec5f3f00bb7b-1] regionserver.HStore(310): Store=ba078268ca43ed8d8506ec5f3f00bb7b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:18,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,759 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, REOPEN/MOVE in 509 msec 2023-07-18 07:15:18,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:18,767 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ba078268ca43ed8d8506ec5f3f00bb7b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10479411840, jitterRate=-0.0240287184715271}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:18,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ba078268ca43ed8d8506ec5f3f00bb7b: 2023-07-18 07:15:18,768 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b., pid=104, masterSystemTime=1689664518728 2023-07-18 07:15:18,770 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:18,772 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:18,773 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:18,773 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664518773"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664518773"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664518773"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664518773"}]},"ts":"1689664518773"} 2023-07-18 07:15:18,777 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-18 07:15:18,778 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,33769,1689664501155 in 199 msec 2023-07-18 07:15:18,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, REOPEN/MOVE in 536 msec 2023-07-18 07:15:19,038 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 07:15:19,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-18 07:15:19,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_824962729. 
2023-07-18 07:15:19,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:19,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:19,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:19,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:19,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-18 07:15:19,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:19,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:19,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:19,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_824962729 2023-07-18 07:15:19,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:19,259 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-18 07:15:19,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-18 07:15:19,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 07:15:19,264 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664519264"}]},"ts":"1689664519264"} 
2023-07-18 07:15:19,265 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-18 07:15:19,267 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-18 07:15:19,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, UNASSIGN}] 2023-07-18 07:15:19,271 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, UNASSIGN 2023-07-18 07:15:19,272 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:19,272 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664519272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664519272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664519272"}]},"ts":"1689664519272"} 2023-07-18 07:15:19,274 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:19,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 07:15:19,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:19,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 196f48d1c2221e7e21a740e59c872c51, disabling compactions & flushes 2023-07-18 07:15:19,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:19,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:19,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. after waiting 0 ms 2023-07-18 07:15:19,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 
2023-07-18 07:15:19,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:19,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51. 2023-07-18 07:15:19,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 196f48d1c2221e7e21a740e59c872c51: 2023-07-18 07:15:19,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:19,440 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=196f48d1c2221e7e21a740e59c872c51, regionState=CLOSED 2023-07-18 07:15:19,441 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664519440"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664519440"}]},"ts":"1689664519440"} 2023-07-18 07:15:19,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-18 07:15:19,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 196f48d1c2221e7e21a740e59c872c51, server=jenkins-hbase4.apache.org,33769,1689664501155 in 168 msec 2023-07-18 07:15:19,447 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-18 07:15:19,448 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=196f48d1c2221e7e21a740e59c872c51, UNASSIGN in 178 msec 2023-07-18 07:15:19,449 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664519449"}]},"ts":"1689664519449"} 2023-07-18 07:15:19,450 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-18 07:15:19,452 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-18 07:15:19,455 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 193 msec 2023-07-18 07:15:19,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-18 07:15:19,566 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-18 07:15:19,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-18 07:15:19,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure 
table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,569 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_824962729' 2023-07-18 07:15:19,570 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:19,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:19,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:19,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:19,575 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:19,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 07:15:19,577 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/recovered.edits] 2023-07-18 07:15:19,582 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51/recovered.edits/7.seqid 2023-07-18 07:15:19,583 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveA/196f48d1c2221e7e21a740e59c872c51 2023-07-18 07:15:19,583 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-18 07:15:19,586 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,588 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-18 07:15:19,590 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): 
Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-18 07:15:19,591 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,591 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-18 07:15:19,591 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664519591"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:19,592 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 07:15:19,593 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 196f48d1c2221e7e21a740e59c872c51, NAME => 'GrouptestMultiTableMoveA,,1689664516417.196f48d1c2221e7e21a740e59c872c51.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 07:15:19,593 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-18 07:15:19,593 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664519593"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:19,594 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-18 07:15:19,596 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-18 07:15:19,597 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 29 msec 2023-07-18 07:15:19,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-18 07:15:19,678 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-18 07:15:19,678 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-18 07:15:19,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-18 07:15:19,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:19,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 07:15:19,683 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664519682"}]},"ts":"1689664519682"} 2023-07-18 07:15:19,684 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-18 07:15:19,686 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-18 07:15:19,686 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, UNASSIGN}] 2023-07-18 07:15:19,688 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, UNASSIGN 2023-07-18 07:15:19,689 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:19,689 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664519689"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664519689"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664519689"}]},"ts":"1689664519689"} 2023-07-18 07:15:19,690 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:19,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 07:15:19,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:19,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ba078268ca43ed8d8506ec5f3f00bb7b, disabling compactions & flushes 2023-07-18 07:15:19,844 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:19,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:19,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. after waiting 0 ms 2023-07-18 07:15:19,844 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 2023-07-18 07:15:19,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:19,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b. 
2023-07-18 07:15:19,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ba078268ca43ed8d8506ec5f3f00bb7b: 2023-07-18 07:15:19,856 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:19,860 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=ba078268ca43ed8d8506ec5f3f00bb7b, regionState=CLOSED 2023-07-18 07:15:19,861 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689664519860"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664519860"}]},"ts":"1689664519860"} 2023-07-18 07:15:19,866 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-18 07:15:19,867 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure ba078268ca43ed8d8506ec5f3f00bb7b, server=jenkins-hbase4.apache.org,33769,1689664501155 in 172 msec 2023-07-18 07:15:19,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-18 07:15:19,871 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=ba078268ca43ed8d8506ec5f3f00bb7b, UNASSIGN in 181 msec 2023-07-18 07:15:19,871 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664519871"}]},"ts":"1689664519871"} 2023-07-18 07:15:19,874 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-18 07:15:19,876 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-18 07:15:19,888 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 205 msec 2023-07-18 07:15:19,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-18 07:15:19,985 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-18 07:15:19,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-18 07:15:19,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:19,991 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:19,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_824962729' 2023-07-18 07:15:19,992 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:19,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:19,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:19,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:19,997 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:19,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:19,999 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/recovered.edits] 2023-07-18 07:15:20,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 07:15:20,012 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/recovered.edits/7.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b/recovered.edits/7.seqid 2023-07-18 07:15:20,013 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/GrouptestMultiTableMoveB/ba078268ca43ed8d8506ec5f3f00bb7b 2023-07-18 07:15:20,013 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-18 07:15:20,016 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:20,021 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-18 07:15:20,024 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-18 07:15:20,025 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:20,025 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-18 07:15:20,025 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664520025"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:20,027 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 07:15:20,027 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ba078268ca43ed8d8506ec5f3f00bb7b, NAME => 'GrouptestMultiTableMoveB,,1689664517532.ba078268ca43ed8d8506ec5f3f00bb7b.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 07:15:20,027 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-18 07:15:20,027 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664520027"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:20,029 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-18 07:15:20,037 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-18 07:15:20,038 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 50 msec 2023-07-18 07:15:20,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-18 07:15:20,114 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-18 07:15:20,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,120 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 07:15:20,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33769] to rsgroup default 2023-07-18 07:15:20,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_824962729 2023-07-18 07:15:20,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_824962729, current retry=0 2023-07-18 07:15:20,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155] are moved back to Group_testMultiTableMove_824962729 2023-07-18 07:15:20,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_824962729 => default 2023-07-18 07:15:20,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_824962729 2023-07-18 07:15:20,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:20,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 07:15:20,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:20,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:20,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:20,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,146 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:20,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:20,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:20,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:20,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665720157, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:20,158 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:20,160 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:20,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,161 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:20,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,181 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=510 (was 512), OpenFileDescriptor=785 (was 788), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=482 (was 490), ProcessCount=174 (was 174), AvailableMemoryMB=2618 (was 2865) 2023-07-18 07:15:20,181 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-18 07:15:20,199 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=510, OpenFileDescriptor=785, MaxFileDescriptor=60000, SystemLoadAverage=482, ProcessCount=174, AvailableMemoryMB=2617 2023-07-18 07:15:20,199 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-18 07:15:20,200 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-18 07:15:20,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:20,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:20,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:20,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:20,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,220 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:20,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:20,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:20,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:20,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665720233, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:20,234 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:20,235 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:20,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,236 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:20,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-18 07:15:20,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup oldGroup 2023-07-18 07:15:20,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 07:15:20,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to default 2023-07-18 07:15:20,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-18 07:15:20,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 07:15:20,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-18 07:15:20,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,274 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-18 07:15:20,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 07:15:20,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:20,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41293] to rsgroup anotherRSGroup 2023-07-18 07:15:20,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 07:15:20,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:20,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 07:15:20,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41293,1689664501013] are moved back to default 2023-07-18 07:15:20,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-18 07:15:20,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,296 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 07:15:20,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-18 07:15:20,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-18 07:15:20,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:52448 deadline: 1689665720305, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-18 07:15:20,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-18 07:15:20,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:52448 deadline: 1689665720308, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-18 07:15:20,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-18 07:15:20,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:52448 deadline: 1689665720309, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-18 07:15:20,311 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-18 07:15:20,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:52448 deadline: 1689665720311, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-18 07:15:20,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
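The rename attempts recorded just above are each rejected with a ConstraintException: the source group nonExistingRSGroup does not exist, the target names anotherRSGroup and default are already taken, and the default group itself may not be renamed. A hedged sketch of exercising those constraints from a client follows; it assumes RSGroupAdminClient exposes renameRSGroup(oldName, newName) matching the server-side renameRSGroup shown in the traces, and that the ConstraintException reaches the caller unwrapped, as the WARN lines elsewhere in this log suggest.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameConstraintSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);
          expectRejected(admin, "nonExistingRSGroup", "newRSGroup1"); // source group does not exist
          expectRejected(admin, "oldGroup", "anotherRSGroup");        // target name already in use
          expectRejected(admin, "default", "newRSGroup2");            // default group cannot be renamed
          expectRejected(admin, "oldGroup", "default");               // "default" already exists
        }
      }

      // Assumes renameRSGroup(String, String) is available on the client; adjust if the API differs.
      static void expectRejected(RSGroupAdminClient admin, String from, String to) throws Exception {
        try {
          admin.renameRSGroup(from, to);
          throw new AssertionError("rename " + from + " -> " + to + " should have been rejected");
        } catch (ConstraintException expected) {
          System.out.println("rejected as expected: " + expected.getMessage());
        }
      }
    }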
2023-07-18 07:15:20,318 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41293] to rsgroup default 2023-07-18 07:15:20,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-18 07:15:20,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:20,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-18 07:15:20,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41293,1689664501013] are moved back to anotherRSGroup 2023-07-18 07:15:20,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-18 07:15:20,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-18 07:15:20,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 07:15:20,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-18 07:15:20,337 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,338 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup default 2023-07-18 07:15:20,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-18 07:15:20,341 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-18 07:15:20,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to oldGroup 2023-07-18 07:15:20,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-18 07:15:20,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,344 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-18 07:15:20,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,348 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:20,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
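The surrounding entries follow the usual rsgroup test cycle: add a group (oldGroup, then anotherRSGroup), move region servers into it by address, verify membership with GetRSGroupInfo, and finally return the servers to default and remove the group. A minimal sketch of that cycle is below, assuming the same RSGroupAdminClient calls; the host names and ports are the ones reported in this log and stand in for real region server addresses.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);
          admin.addRSGroup("oldGroup");
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromString("jenkins-hbase4.apache.org:33769"));
          servers.add(Address.fromString("jenkins-hbase4.apache.org:39465"));
          // The master moves the source group's regions off these servers before reassigning them,
          // which is what the "Moving 0 region(s) ... / Move servers done" lines above report.
          admin.moveServers(servers, "oldGroup");
          RSGroupInfo info = admin.getRSGroupInfo("oldGroup");
          System.out.println("oldGroup now owns: " + info.getServers());
          // Teardown mirrors the log: return the servers to default and drop the group.
          admin.moveServers(servers, "default");
          admin.removeRSGroup("oldGroup");
        }
      }
    }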
2023-07-18 07:15:20,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:20,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:20,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:20,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,360 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:20,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:20,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:20,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:20,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665720370, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:20,371 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:20,372 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:20,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,373 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:20,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,391 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=514 (was 510) Potentially hanging thread: hconnection-0x320932da-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=785 (was 785), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=482 (was 482), ProcessCount=174 (was 174), AvailableMemoryMB=2616 (was 2617) 2023-07-18 07:15:20,391 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 07:15:20,408 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=514, OpenFileDescriptor=785, MaxFileDescriptor=60000, SystemLoadAverage=482, ProcessCount=174, AvailableMemoryMB=2616 2023-07-18 07:15:20,408 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 07:15:20,408 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-18 07:15:20,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:20,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
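The ConstraintException trace above (callId 614, and again below at callId 642) is expected noise: the cleanup also tries to move the master's own address, jenkins-hbase4.apache.org:33141, into the "master" group, but only addresses of live region servers pass the rsgroup constraint check, so the request is rejected and the test merely logs "Got this on setup, FYI". A hedged, illustrative sketch of how a caller could restrict such a move to live region-server addresses; the guard itself is not the test's code:

    import java.util.EnumSet;
    import java.util.Set;
    import java.util.stream.Collectors;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.net.Address;

    final class LiveServerAddresses {
      // Only host:port pairs of live region servers are movable between rsgroups;
      // the active master's RPC endpoint (port 33141 in this log) is not one of them.
      static Set<Address> liveRegionServers(Admin admin) throws Exception {
        ClusterMetrics metrics =
            admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS));
        return metrics.getLiveServerMetrics().keySet().stream()
            .map(sn -> Address.fromParts(sn.getHostname(), sn.getPort()))
            .collect(Collectors.toSet());
      }
    }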
2023-07-18 07:15:20,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:20,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:20,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:20,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:20,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:20,424 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:20,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:20,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,427 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:20,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:20,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:20,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665720433, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:20,433 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:20,435 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:20,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,436 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:20,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:20,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-18 07:15:20,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:20,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:20,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup oldgroup 2023-07-18 07:15:20,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:20,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 07:15:20,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to default 2023-07-18 07:15:20,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-18 07:15:20,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:20,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:20,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:20,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 07:15:20,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:20,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:20,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-18 07:15:20,471 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:20,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-18 07:15:20,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 07:15:20,473 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:20,474 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:20,474 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:20,475 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:20,477 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:20,479 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,479 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 empty. 
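The "create 'testRename'" request above is the shell-style rendering of a one-family table descriptor (family 'tr', REGION_REPLICATION 1; the remaining attributes are the master's expanded view of the descriptor). A minimal sketch of the equivalent client call with the branch-2 builder API; only the table and family names are taken from the log, everything else is left unset in this sketch:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    final class CreateTestRenameSketch {
      // Builds a descriptor like the one the master logs and submits the
      // CreateTableProcedure (pid=114 in this run).
      static void create(Admin admin) throws Exception {
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("testRename"))
            .setRegionReplication(1)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
            .build());
      }
    }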
2023-07-18 07:15:20,480 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,480 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-18 07:15:20,494 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:20,495 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7a6d703275ea82872b35fb13c8326bf5, NAME => 'testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:20,507 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:20,507 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 7a6d703275ea82872b35fb13c8326bf5, disabling compactions & flushes 2023-07-18 07:15:20,507 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:20,507 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:20,507 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. after waiting 0 ms 2023-07-18 07:15:20,507 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:20,507 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:20,507 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:20,509 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:20,510 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664520510"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664520510"}]},"ts":"1689664520510"} 2023-07-18 07:15:20,511 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 07:15:20,512 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:20,512 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664520512"}]},"ts":"1689664520512"} 2023-07-18 07:15:20,513 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-18 07:15:20,517 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:20,518 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:20,518 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:20,518 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:20,518 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, ASSIGN}] 2023-07-18 07:15:20,519 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, ASSIGN 2023-07-18 07:15:20,520 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:20,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 07:15:20,671 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
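The periodic "Checking to see if procedure is done pid=114" lines are the client polling the master for completion of the create procedure while the ASSIGN subprocedure (pid=115) runs. With the asynchronous Admin API that polling sits behind a Future; a short sketch, assuming the two-argument Admin.createTableAsync(descriptor, splitKeys) that ships in branch-2:

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    final class CreateAndWaitSketch {
      // Submit the create and block until the master reports the procedure finished,
      // which is what produces the repeated "is procedure done" checks in the log.
      static void createAndWait(Admin admin, TableDescriptor descriptor) throws Exception {
        Future<Void> pending = admin.createTableAsync(descriptor, null); // null = no pre-split keys
        pending.get();
      }
    }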
2023-07-18 07:15:20,672 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:20,672 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664520672"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664520672"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664520672"}]},"ts":"1689664520672"} 2023-07-18 07:15:20,674 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:20,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 07:15:20,829 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:20,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a6d703275ea82872b35fb13c8326bf5, NAME => 'testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:20,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:20,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,829 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,831 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,832 DEBUG [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/tr 2023-07-18 07:15:20,832 DEBUG [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/tr 2023-07-18 07:15:20,833 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a6d703275ea82872b35fb13c8326bf5 columnFamilyName tr 2023-07-18 07:15:20,833 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] regionserver.HStore(310): Store=7a6d703275ea82872b35fb13c8326bf5/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:20,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,837 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:20,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:20,840 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a6d703275ea82872b35fb13c8326bf5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10932990880, jitterRate=0.018214121460914612}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:20,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:20,841 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5., pid=116, masterSystemTime=1689664520825 2023-07-18 07:15:20,842 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:20,842 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
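After the open above, a client can observe where the region landed (here jenkins-hbase4.apache.org,41293,...) through the RegionLocator API. A small sketch; the table name comes from the log, the rest is standard branch-2 client API:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    final class LocationSketch {
      // Lists where each region of testRename is currently deployed; right after the
      // open above this would report ...,41293,... for region 7a6d7032....
      static void printLocations(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("testRename"))) {
          for (HRegionLocation location : locator.getAllRegionLocations()) {
            System.out.println(location.getRegion().getEncodedName() + " on " + location.getServerName());
          }
        }
      }
    }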
2023-07-18 07:15:20,843 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:20,843 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664520843"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664520843"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664520843"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664520843"}]},"ts":"1689664520843"} 2023-07-18 07:15:20,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-18 07:15:20,846 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,41293,1689664501013 in 170 msec 2023-07-18 07:15:20,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-18 07:15:20,857 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, ASSIGN in 328 msec 2023-07-18 07:15:20,858 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:20,858 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664520858"}]},"ts":"1689664520858"} 2023-07-18 07:15:20,859 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-18 07:15:20,862 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:20,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 393 msec 2023-07-18 07:15:21,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-18 07:15:21,076 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-18 07:15:21,076 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-18 07:15:21,077 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:21,082 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-18 07:15:21,083 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:21,083 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
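"Waiting until all regions of table testRename get assigned. Timeout = 60000ms" is the mini-cluster utility blocking until assignment settles before the test proceeds. A sketch of that wait, assuming the HBaseTestingUtility helper of the same name:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class WaitForAssignmentSketch {
      // Blocks (up to the timeout) until hbase:meta and the AssignmentManager both report
      // every region of the table as assigned, matching the Waiter lines in the log.
      static void waitAssigned(HBaseTestingUtility util) throws Exception {
        util.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60_000);
      }
    }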
2023-07-18 07:15:21,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-18 07:15:21,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:21,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:21,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:21,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:21,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-18 07:15:21,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 7a6d703275ea82872b35fb13c8326bf5 to RSGroup oldgroup 2023-07-18 07:15:21,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:21,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:21,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:21,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:21,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:21,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE 2023-07-18 07:15:21,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-18 07:15:21,093 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE 2023-07-18 07:15:21,094 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:21,094 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664521094"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664521094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664521094"}]},"ts":"1689664521094"} 2023-07-18 07:15:21,095 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:21,180 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-18 07:15:21,248 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a6d703275ea82872b35fb13c8326bf5, disabling compactions & flushes 2023-07-18 07:15:21,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:21,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:21,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. after waiting 0 ms 2023-07-18 07:15:21,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:21,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:21,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
2023-07-18 07:15:21,254 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:21,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7a6d703275ea82872b35fb13c8326bf5 move to jenkins-hbase4.apache.org,33769,1689664501155 record at close sequenceid=2 2023-07-18 07:15:21,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,256 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=CLOSED 2023-07-18 07:15:21,256 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664521256"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664521256"}]},"ts":"1689664521256"} 2023-07-18 07:15:21,259 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-18 07:15:21,259 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,41293,1689664501013 in 163 msec 2023-07-18 07:15:21,259 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33769,1689664501155; forceNewPlan=false, retain=false 2023-07-18 07:15:21,410 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 07:15:21,410 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:21,410 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664521410"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664521410"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664521410"}]},"ts":"1689664521410"} 2023-07-18 07:15:21,412 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:21,573 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
2023-07-18 07:15:21,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a6d703275ea82872b35fb13c8326bf5, NAME => 'testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:21,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:21,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,576 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,577 DEBUG [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/tr 2023-07-18 07:15:21,577 DEBUG [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/tr 2023-07-18 07:15:21,578 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a6d703275ea82872b35fb13c8326bf5 columnFamilyName tr 2023-07-18 07:15:21,578 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] regionserver.HStore(310): Store=7a6d703275ea82872b35fb13c8326bf5/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:21,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:21,586 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a6d703275ea82872b35fb13c8326bf5; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10782716800, jitterRate=0.004218757152557373}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:21,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:21,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5., pid=119, masterSystemTime=1689664521568 2023-07-18 07:15:21,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:21,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:21,590 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:21,590 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664521590"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664521590"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664521590"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664521590"}]},"ts":"1689664521590"} 2023-07-18 07:15:21,594 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-18 07:15:21,594 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,33769,1689664501155 in 180 msec 2023-07-18 07:15:21,595 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE in 502 msec 2023-07-18 07:15:22,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-18 07:15:22,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
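The block above records the rsgroup endpoint handling "move tables [testRename] to rsgroup oldgroup": the group znodes are rewritten, and TransitRegionStateProcedure pid=117 closes the region on jenkins-hbase4.apache.org,41293,1689664501013 and reopens it on jenkins-hbase4.apache.org,33769,1689664501155 before the endpoint reports all regions moved. A rough sketch of how a client issues that request, assuming the RSGroupAdminClient wrapper from the hbase-rsgroup module that these tests exercise (treat the exact signatures as an assumption and check them against the RSGroupAdmin interface in this branch):

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Equivalent of the logged MoveTables request.
          rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");
          // The master reopens each region of the table on a server of the target group,
          // which is the REOPEN/MOVE procedure chain (pid=117..119) visible above.
          System.out.println(
              rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename")).getName());
        }
      }
    }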
2023-07-18 07:15:22,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:22,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:22,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:22,100 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:22,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 07:15:22,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:22,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-18 07:15:22,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:22,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 07:15:22,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:22,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:22,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:22,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-18 07:15:22,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:22,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:22,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:22,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 
07:15:22,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:22,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:22,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:22,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:22,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41293] to rsgroup normal 2023-07-18 07:15:22,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:22,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:22,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:22,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:22,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:22,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 07:15:22,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41293,1689664501013] are moved back to default 2023-07-18 07:15:22,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-18 07:15:22,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:22,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:22,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:22,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 07:15:22,128 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
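Two further endpoint calls follow above: AddRSGroup creates the empty group 'normal', then MoveServers moves jenkins-hbase4.apache.org:41293 out of the default group into it; per the "Moving 0 region(s)" entry, no regions had to be reassigned off that server first. A hedged sketch of the same pair of calls, again assuming the RSGroupAdminClient wrapper plus the Address helper used by the rsgroup code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class AddGroupAndMoveServerSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("normal");                      // AddRSGroup in the log
          rsGroupAdmin.moveServers(                               // MoveServers in the log
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:41293")),
              "normal");
        }
      }
    }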
2023-07-18 07:15:22,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:22,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-18 07:15:22,133 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:22,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-18 07:15:22,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 07:15:22,135 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:22,135 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:22,135 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:22,136 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:22,136 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:22,138 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:22,140 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,141 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f empty. 
2023-07-18 07:15:22,141 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,141 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-18 07:15:22,160 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:22,161 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => a9acafa34199e073a03f352ac1189b9f, NAME => 'unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:22,171 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:22,172 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing a9acafa34199e073a03f352ac1189b9f, disabling compactions & flushes 2023-07-18 07:15:22,172 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,172 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,172 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. after waiting 0 ms 2023-07-18 07:15:22,172 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,172 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,172 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:22,174 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:22,175 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664522175"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664522175"}]},"ts":"1689664522175"} 2023-07-18 07:15:22,176 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 07:15:22,177 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:22,177 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664522177"}]},"ts":"1689664522177"} 2023-07-18 07:15:22,178 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-18 07:15:22,188 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, ASSIGN}] 2023-07-18 07:15:22,190 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, ASSIGN 2023-07-18 07:15:22,190 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:22,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 07:15:22,342 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:22,342 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664522342"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664522342"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664522342"}]},"ts":"1689664522342"} 2023-07-18 07:15:22,344 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:22,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 07:15:22,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 
2023-07-18 07:15:22,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9acafa34199e073a03f352ac1189b9f, NAME => 'unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:22,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:22,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,501 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,503 DEBUG [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/ut 2023-07-18 07:15:22,503 DEBUG [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/ut 2023-07-18 07:15:22,503 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9acafa34199e073a03f352ac1189b9f columnFamilyName ut 2023-07-18 07:15:22,504 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] regionserver.HStore(310): Store=a9acafa34199e073a03f352ac1189b9f/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:22,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:22,510 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9acafa34199e073a03f352ac1189b9f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9517090400, jitterRate=-0.11365188658237457}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:22,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:22,511 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f., pid=122, masterSystemTime=1689664522495 2023-07-18 07:15:22,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,512 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 
2023-07-18 07:15:22,513 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:22,513 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664522513"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664522513"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664522513"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664522513"}]},"ts":"1689664522513"} 2023-07-18 07:15:22,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-18 07:15:22,516 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,42375,1689664504791 in 171 msec 2023-07-18 07:15:22,517 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-18 07:15:22,518 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, ASSIGN in 329 msec 2023-07-18 07:15:22,518 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:22,518 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664522518"}]},"ts":"1689664522518"} 2023-07-18 07:15:22,519 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-18 07:15:22,521 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:22,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 392 msec 2023-07-18 07:15:22,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-18 07:15:22,737 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-18 07:15:22,738 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-18 07:15:22,738 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:22,741 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-18 07:15:22,741 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:22,742 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
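CreateTableProcedure pid=120 above builds 'unmovedTable' with the single family 'ut'; the descriptor printed at 07:15:22,130 is the shell-style rendering of that schema (REGION_REPLICATION 1, one family with BLOOMFILTER NONE, VERSIONS 1, BLOCKSIZE 65536, and so on). A sketch of the equivalent descriptor-builder calls, assuming the standard 2.x builder API and setting only the attributes spelled out in the log:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class UnmovedTableDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor unmovedTable = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("unmovedTable"))
            .setRegionReplication(1)                              // REGION_REPLICATION => '1'
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("ut"))
                .setBloomFilterType(BloomType.NONE)               // BLOOMFILTER => 'NONE'
                .setMaxVersions(1)                                // VERSIONS => '1'
                .setBlocksize(65536)                              // BLOCKSIZE => '65536'
                .build())
            .build();
        System.out.println(unmovedTable); // prints roughly the same attribute map as the log line
      }
    }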
2023-07-18 07:15:22,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-18 07:15:22,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-18 07:15:22,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:22,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:22,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:22,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:22,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-18 07:15:22,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region a9acafa34199e073a03f352ac1189b9f to RSGroup normal 2023-07-18 07:15:22,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE 2023-07-18 07:15:22,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-18 07:15:22,750 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE 2023-07-18 07:15:22,750 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:22,750 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664522750"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664522750"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664522750"}]},"ts":"1689664522750"} 2023-07-18 07:15:22,752 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:22,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9acafa34199e073a03f352ac1189b9f, disabling compactions & flushes 2023-07-18 07:15:22,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 
2023-07-18 07:15:22,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. after waiting 0 ms 2023-07-18 07:15:22,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:22,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:22,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:22,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a9acafa34199e073a03f352ac1189b9f move to jenkins-hbase4.apache.org,41293,1689664501013 record at close sequenceid=2 2023-07-18 07:15:22,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:22,913 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=CLOSED 2023-07-18 07:15:22,914 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664522913"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664522913"}]},"ts":"1689664522913"} 2023-07-18 07:15:22,917 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-18 07:15:22,917 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,42375,1689664504791 in 163 msec 2023-07-18 07:15:22,917 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:23,068 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:23,068 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664523068"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664523068"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664523068"}]},"ts":"1689664523068"} 2023-07-18 07:15:23,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:23,225 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:23,225 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9acafa34199e073a03f352ac1189b9f, NAME => 'unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:23,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:23,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,227 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,228 DEBUG [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/ut 2023-07-18 07:15:23,228 DEBUG [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/ut 2023-07-18 07:15:23,228 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
a9acafa34199e073a03f352ac1189b9f columnFamilyName ut 2023-07-18 07:15:23,229 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] regionserver.HStore(310): Store=a9acafa34199e073a03f352ac1189b9f/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:23,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9acafa34199e073a03f352ac1189b9f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11894681120, jitterRate=0.10777850449085236}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:23,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:23,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f., pid=125, masterSystemTime=1689664523221 2023-07-18 07:15:23,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:23,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 
2023-07-18 07:15:23,241 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:23,241 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664523241"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664523241"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664523241"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664523241"}]},"ts":"1689664523241"} 2023-07-18 07:15:23,249 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-18 07:15:23,249 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,41293,1689664501013 in 177 msec 2023-07-18 07:15:23,250 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE in 500 msec 2023-07-18 07:15:23,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-18 07:15:23,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-18 07:15:23,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:23,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:23,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:23,760 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:23,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 07:15:23,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:23,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-18 07:15:23,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:23,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 07:15:23,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:23,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-18 07:15:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:23,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:23,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:23,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:23,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-18 07:15:23,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-18 07:15:23,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:23,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:23,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-18 07:15:23,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:23,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-18 07:15:23,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:23,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-18 07:15:23,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:23,797 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:23,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:23,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-18 07:15:23,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:23,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:23,803 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:23,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:23,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:23,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-18 07:15:23,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region a9acafa34199e073a03f352ac1189b9f to RSGroup default 2023-07-18 07:15:23,810 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE 2023-07-18 07:15:23,811 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE 2023-07-18 07:15:23,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 07:15:23,811 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:23,812 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664523811"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664523811"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664523811"}]},"ts":"1689664523811"} 2023-07-18 07:15:23,813 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:23,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9acafa34199e073a03f352ac1189b9f, disabling compactions & flushes 2023-07-18 07:15:23,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:23,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:23,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. after waiting 0 ms 2023-07-18 07:15:23,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:23,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:23,972 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:23,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:23,973 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a9acafa34199e073a03f352ac1189b9f move to jenkins-hbase4.apache.org,42375,1689664504791 record at close sequenceid=5 2023-07-18 07:15:23,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:23,975 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=CLOSED 2023-07-18 07:15:23,975 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664523975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664523975"}]},"ts":"1689664523975"} 2023-07-18 07:15:23,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-18 07:15:23,979 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,41293,1689664501013 in 164 msec 2023-07-18 07:15:23,980 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:24,131 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:24,131 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664524131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664524131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664524131"}]},"ts":"1689664524131"} 2023-07-18 07:15:24,133 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:24,234 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-18 07:15:24,290 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:24,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9acafa34199e073a03f352ac1189b9f, NAME => 'unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:24,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:24,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,290 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,292 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,293 DEBUG [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/ut 2023-07-18 07:15:24,293 DEBUG [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/ut 2023-07-18 07:15:24,293 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9acafa34199e073a03f352ac1189b9f columnFamilyName ut 2023-07-18 07:15:24,294 INFO [StoreOpener-a9acafa34199e073a03f352ac1189b9f-1] regionserver.HStore(310): Store=a9acafa34199e073a03f352ac1189b9f/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:24,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,299 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:24,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9acafa34199e073a03f352ac1189b9f; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11896016000, jitterRate=0.10790282487869263}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:24,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:24,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f., pid=128, masterSystemTime=1689664524285 2023-07-18 07:15:24,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:24,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 
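The entries above trace the first half of testRenameRSGroup: the client renames oldgroup to newgroup (RSGroupAdminService.RenameRSGroup, with the /hbase/rsgroup znodes rewritten), re-reads group membership for newgroup, testRename and unmovedTable, and then moves unmovedTable back to the default group, which the master executes as a per-region REOPEN/MOVE TransitRegionStateProcedure (close on jenkins-hbase4.apache.org,41293, reopen on jenkins-hbase4.apache.org,42375 at openSeqNum=8). A minimal client-side sketch of that sequence, assuming the branch-2.4 RSGroupAdminClient API; the method names are inferred from the RPC names in the log, not copied from the test source:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RenameRSGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Rename keeps the group's servers and tables; only the name changes.
          rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
          // The renamed group should now own the tables that were in oldgroup.
          RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
          System.out.println("newgroup tables: " + renamed.getTables());
          // Moving a table out of the group reopens each of its regions on a
          // server of the target group (the REOPEN/MOVE procedures above).
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("unmovedTable")), "default");
        }
      }
    }
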
2023-07-18 07:15:24,303 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=a9acafa34199e073a03f352ac1189b9f, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:24,303 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689664524303"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664524303"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664524303"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664524303"}]},"ts":"1689664524303"} 2023-07-18 07:15:24,307 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-18 07:15:24,307 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure a9acafa34199e073a03f352ac1189b9f, server=jenkins-hbase4.apache.org,42375,1689664504791 in 172 msec 2023-07-18 07:15:24,310 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=a9acafa34199e073a03f352ac1189b9f, REOPEN/MOVE in 498 msec 2023-07-18 07:15:24,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-18 07:15:24,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-18 07:15:24,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:24,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41293] to rsgroup default 2023-07-18 07:15:24,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-18 07:15:24,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:24,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:24,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:24,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:24,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-18 07:15:24,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,41293,1689664501013] are moved back to normal 2023-07-18 07:15:24,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-18 07:15:24,818 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:24,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-18 07:15:24,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:24,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:24,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:24,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 07:15:24,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:24,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:24,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:24,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:24,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:24,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:24,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:24,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:24,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:24,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:24,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:24,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-18 07:15:24,839 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:24,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:24,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:24,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-18 07:15:24,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(345): Moving region 7a6d703275ea82872b35fb13c8326bf5 to RSGroup default 2023-07-18 07:15:24,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE 2023-07-18 07:15:24,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-18 07:15:24,842 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE 2023-07-18 07:15:24,843 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:24,843 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664524843"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664524843"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664524843"}]},"ts":"1689664524843"} 2023-07-18 07:15:24,844 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,33769,1689664501155}] 2023-07-18 07:15:24,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:24,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a6d703275ea82872b35fb13c8326bf5, disabling compactions & flushes 2023-07-18 07:15:24,998 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:24,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:24,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
after waiting 0 ms 2023-07-18 07:15:24,998 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:25,002 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-18 07:15:25,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:25,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:25,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7a6d703275ea82872b35fb13c8326bf5 move to jenkins-hbase4.apache.org,41293,1689664501013 record at close sequenceid=5 2023-07-18 07:15:25,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,006 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=CLOSED 2023-07-18 07:15:25,006 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664525006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664525006"}]},"ts":"1689664525006"} 2023-07-18 07:15:25,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-18 07:15:25,009 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,33769,1689664501155 in 164 msec 2023-07-18 07:15:25,010 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:25,160 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
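The teardown entries above follow the usual cleanup pattern between rsgroup tests: every table and server is moved back to the default group and the leftover groups (normal, master, and later newgroup) are dropped, one MoveTables/MoveServers/RemoveRSGroup RPC at a time, with the ZK GroupInfo count shrinking after each write. A hedged sketch of that cleanup loop, again assuming the RSGroupAdminClient API named by the logged RPCs (listRSGroups, moveTables, moveServers, removeRSGroup); it illustrates the pattern and is not the test's tearDownAfterMethod verbatim:

    import java.io.IOException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RSGroupCleanupSketch {
      // Move every non-default group's tables and servers back to 'default',
      // then remove the group, mirroring the RPC sequence in the log above.
      // Empty sets are skipped, matching "moveTables() passed an empty set".
      static void cleanup(RSGroupAdminClient rsGroupAdmin) throws IOException {
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue;
          }
          if (!group.getTables().isEmpty()) {
            rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
          }
          if (!group.getServers().isEmpty()) {
            rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
          }
          rsGroupAdmin.removeRSGroup(group.getName());
        }
      }
    }
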
2023-07-18 07:15:25,160 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:25,161 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664525160"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664525160"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664525160"}]},"ts":"1689664525160"} 2023-07-18 07:15:25,165 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:25,321 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:25,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a6d703275ea82872b35fb13c8326bf5, NAME => 'testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:25,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:25,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,323 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,324 DEBUG [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/tr 2023-07-18 07:15:25,324 DEBUG [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/tr 2023-07-18 07:15:25,325 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a6d703275ea82872b35fb13c8326bf5 columnFamilyName tr 2023-07-18 07:15:25,325 INFO [StoreOpener-7a6d703275ea82872b35fb13c8326bf5-1] regionserver.HStore(310): Store=7a6d703275ea82872b35fb13c8326bf5/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:25,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,328 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:25,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a6d703275ea82872b35fb13c8326bf5; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11897852320, jitterRate=0.10807384550571442}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:25,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:25,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5., pid=131, masterSystemTime=1689664525317 2023-07-18 07:15:25,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:25,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
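After the OpenRegionProcedure above completes, testRename's single region is serving from jenkins-hbase4.apache.org,41293 with openSeqNum=8. A small sketch of how a client could confirm the new assignment after such a move, using the standard RegionLocator API with a forced cache refresh; the table name matches the log, everything else is illustrative:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionLocationCheck {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("testRename"))) {
          // reload=true bypasses the client's location cache so we see the
          // post-move assignment the procedure wrote to hbase:meta.
          HRegionLocation loc =
              locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println("testRename region is on " + loc.getServerName());
        }
      }
    }
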
2023-07-18 07:15:25,335 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=7a6d703275ea82872b35fb13c8326bf5, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:25,335 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689664525335"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664525335"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664525335"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664525335"}]},"ts":"1689664525335"} 2023-07-18 07:15:25,338 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-18 07:15:25,338 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 7a6d703275ea82872b35fb13c8326bf5, server=jenkins-hbase4.apache.org,41293,1689664501013 in 172 msec 2023-07-18 07:15:25,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=7a6d703275ea82872b35fb13c8326bf5, REOPEN/MOVE in 497 msec 2023-07-18 07:15:25,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-18 07:15:25,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-18 07:15:25,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:25,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup default 2023-07-18 07:15:25,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-18 07:15:25,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:25,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-18 07:15:25,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to newgroup 2023-07-18 07:15:25,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-18 07:15:25,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:25,848 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-18 07:15:25,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:25,862 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:25,866 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:25,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:25,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:25,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:25,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:25,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:25,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:25,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:25,881 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665725880, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:25,881 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:25,883 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:25,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:25,884 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:25,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:25,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:25,902 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=507 (was 514), OpenFileDescriptor=771 (was 785), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=451 (was 482), ProcessCount=174 (was 174), AvailableMemoryMB=2428 (was 2616) 2023-07-18 07:15:25,902 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-18 07:15:25,920 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=507, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=451, ProcessCount=174, AvailableMemoryMB=2427 2023-07-18 07:15:25,920 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-18 07:15:25,921 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-18 07:15:25,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:25,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:25,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:25,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:25,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:25,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:25,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:25,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:25,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:25,938 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:25,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:25,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:25,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:25,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:25,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:25,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:25,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:25,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665725950, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:25,950 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:25,952 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:25,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:25,953 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:25,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:25,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:25,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-18 07:15:25,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:25,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-18 07:15:25,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-18 07:15:25,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-18 07:15:25,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:25,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-18 07:15:25,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:25,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:52448 deadline: 1689665725963, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-18 07:15:25,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-18 07:15:25,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:25,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:52448 deadline: 1689665725965, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 07:15:25,969 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-18 07:15:25,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-18 07:15:25,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-18 07:15:25,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:25,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:52448 deadline: 1689665725975, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-18 07:15:25,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:25,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:25,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
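The testBogusArgs entries above probe the RSGroupAdminService endpoints with names that do not exist: the info lookups for group=bogus, table=nonexistent and server=bogus:123 return without error, while removeRSGroup, moveServers and balanceRSGroup against the bogus group are rejected with ConstraintException ("RSGroup bogus does not exist" / "RSGroup does not exist: bogus"). The following is a minimal client-side sketch of that pattern, not the test's actual code; it assumes the RSGroupAdminClient method names of the hbase-rsgroup module (only moveServers is confirmed by the stack traces above), and the expectConstraintException helper is purely illustrative.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {

  // Functional interface so the checks below can be written as lambdas.
  interface AdminCall {
    void run() throws IOException;
  }

  static void expectConstraintException(AdminCall call) throws IOException {
    try {
      call.run();
      throw new IllegalStateException("expected a ConstraintException");
    } catch (ConstraintException expected) {
      // Rejected on the master, surfaced on the client after unwrapping, as in the log.
    }
  }

  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // These lookups complete without error in the log above.
      rsGroupAdmin.getRSGroupInfo("bogus");
      rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("nonexistent"));
      rsGroupAdmin.getRSGroupOfServer(Address.fromParts("bogus", 123));

      // Mutating calls against a non-existent group are rejected with ConstraintException.
      expectConstraintException(() -> rsGroupAdmin.removeRSGroup("bogus"));
      expectConstraintException(() -> rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("bogus", 123)), "bogus"));
      expectConstraintException(() -> rsGroupAdmin.balanceRSGroup("bogus"));
    }
  }
}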
2023-07-18 07:15:25,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:25,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:25,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:25,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:25,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,986 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:25,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:25,990 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:25,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:25,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:25,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:25,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:25,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:25,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:25,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:26,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:26,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665726003, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:26,007 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:26,008 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:26,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:26,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,009 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:26,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:26,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:26,027 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=511 (was 507) Potentially hanging thread: hconnection-0x320932da-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-30 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x320932da-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xb2b369f-shared-pool-29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=451 (was 451), ProcessCount=174 (was 174), AvailableMemoryMB=2426 (was 2427) 2023-07-18 07:15:26,027 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-18 07:15:26,045 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=511, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=451, ProcessCount=174, AvailableMemoryMB=2425 2023-07-18 07:15:26,045 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-18 07:15:26,045 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-18 07:15:26,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:26,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:26,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
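Between test methods the log repeats the same cleanup cycle: tables and servers are moved back to the default group, the master rsgroup is removed and re-added, and an attempt to move the active master's address (jenkins-hbase4.apache.org:33141) into it is rejected with ConstraintException because the master is not a region server; TestRSGroupsBase only logs the rejection as "Got this on setup, FYI". The sketch below illustrates that tolerant cleanup step under those assumptions; the helper name and logger are invented here and this is not the actual TestRSGroupsBase code.

import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MasterGroupCleanupSketch {
  private static final Logger LOG = LoggerFactory.getLogger(MasterGroupCleanupSketch.class);

  // Hypothetical helper: try to place the master's address in the "master" rsgroup and
  // tolerate the expected rejection, mirroring the WARN entries in the log above.
  static void moveMasterToItsGroup(RSGroupAdminClient rsGroupAdmin,
      String masterHost, int masterPort) throws Exception {
    try {
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts(masterHost, masterPort)), "master");
    } catch (ConstraintException e) {
      // The master is not a region server, so the move is refused; cleanup continues.
      LOG.warn("Got this on setup, FYI", e);
    }
  }
}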
2023-07-18 07:15:26,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:26,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:26,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:26,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:26,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:26,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:26,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:26,060 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:26,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:26,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:26,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:26,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:26,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:26,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:26,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:26,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:26,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665726071, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:26,072 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:26,074 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:26,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:26,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,076 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:26,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:26,077 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:26,078 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:26,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:26,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_2080490170 2023-07-18 07:15:26,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:26,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2080490170 2023-07-18 
07:15:26,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:26,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:26,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:26,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:26,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup Group_testDisabledTableMove_2080490170 2023-07-18 07:15:26,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:26,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2080490170 2023-07-18 07:15:26,103 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:26,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:26,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-18 07:15:26,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to default 2023-07-18 07:15:26,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_2080490170 2023-07-18 07:15:26,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:26,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:26,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:26,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_2080490170 2023-07-18 07:15:26,111 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:26,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:26,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:26,116 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:26,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-18 07:15:26,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 07:15:26,120 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:26,122 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2080490170 2023-07-18 07:15:26,123 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:26,123 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:26,126 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:26,130 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,130 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,130 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,130 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,130 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,131 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 empty. 2023-07-18 07:15:26,131 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 empty. 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 empty. 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf empty. 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d empty. 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,132 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,133 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,133 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 07:15:26,152 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:26,154 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => a243dd04fabff6acd959589651598faf, NAME => 'Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:26,155 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 23f1ee3193a9bcd52f6f94383a6e3ff4, NAME => 'Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:26,156 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 17ae3169ea39138d566218973ef1ad09, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:26,194 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,195 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 17ae3169ea39138d566218973ef1ad09, disabling compactions & flushes 2023-07-18 07:15:26,195 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:26,195 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:26,195 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. after waiting 0 ms 2023-07-18 07:15:26,195 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 
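The RSGroupAdminService requests logged at 07:15:26,087-26,111 (AddRSGroup, MoveServers, ListRSGroupInfos, GetRSGroupInfo) correspond, on the client side, to calls against the RSGroupAdminClient in the hbase-rsgroup module. A minimal sketch follows; the group name and the two server addresses are copied from the log above, while the connection setup and class layout are assumptions for illustration, not code taken from TestRSGroupsAdmin1:

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupMoveSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // 07:15:26,091 AddRSGroup
          rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_2080490170");

          // 07:15:26,100 MoveServers: the two region servers named in the log
          Set<Address> servers = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 39465),
              Address.fromParts("jenkins-hbase4.apache.org", 33769)));
          rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_2080490170");

          // 07:15:26,111 GetRSGroupInfo: confirm the servers now sit in the new group
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_2080490170");
          System.out.println(info.getServers());
        }
      }
    }
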
2023-07-18 07:15:26,195 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:26,195 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 17ae3169ea39138d566218973ef1ad09: 2023-07-18 07:15:26,195 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 1b2b3fa270afb32dce71345ca7a0697d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:26,198 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,198 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 23f1ee3193a9bcd52f6f94383a6e3ff4, disabling compactions & flushes 2023-07-18 07:15:26,198 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:26,198 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:26,198 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. after waiting 0 ms 2023-07-18 07:15:26,198 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:26,198 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 
2023-07-18 07:15:26,198 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 23f1ee3193a9bcd52f6f94383a6e3ff4: 2023-07-18 07:15:26,199 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 58d8eff1e305bbd4ac3b1e3e6df9d5e8, NAME => 'Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp 2023-07-18 07:15:26,211 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,212 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 58d8eff1e305bbd4ac3b1e3e6df9d5e8, disabling compactions & flushes 2023-07-18 07:15:26,212 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,212 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,212 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. after waiting 0 ms 2023-07-18 07:15:26,212 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,212 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,212 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 58d8eff1e305bbd4ac3b1e3e6df9d5e8: 2023-07-18 07:15:26,215 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,215 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 1b2b3fa270afb32dce71345ca7a0697d, disabling compactions & flushes 2023-07-18 07:15:26,215 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 
2023-07-18 07:15:26,215 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:26,215 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. after waiting 0 ms 2023-07-18 07:15:26,215 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:26,215 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:26,215 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 1b2b3fa270afb32dce71345ca7a0697d: 2023-07-18 07:15:26,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 07:15:26,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 07:15:26,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing a243dd04fabff6acd959589651598faf, disabling compactions & flushes 2023-07-18 07:15:26,595 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:26,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:26,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. after waiting 0 ms 2023-07-18 07:15:26,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:26,595 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 
2023-07-18 07:15:26,595 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for a243dd04fabff6acd959589651598faf: 2023-07-18 07:15:26,597 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:26,598 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664526598"}]},"ts":"1689664526598"} 2023-07-18 07:15:26,599 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664526598"}]},"ts":"1689664526598"} 2023-07-18 07:15:26,599 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664526598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664526598"}]},"ts":"1689664526598"} 2023-07-18 07:15:26,599 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664526598"}]},"ts":"1689664526598"} 2023-07-18 07:15:26,599 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664526598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664526598"}]},"ts":"1689664526598"} 2023-07-18 07:15:26,601 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
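The create request logged at 07:15:26,113 ('Group_testDisabledTableMove' with a single column family 'f'), together with the five regions just added to hbase:meta, implies a pre-split table with four split keys matching the STARTKEY/ENDKEY values printed above. A minimal client-side sketch under that assumption (the helper class and method names are illustrative, not from the test):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      static void createTable(Connection conn) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
            .build();
        // Split keys transcribed byte-for-byte from the region boundaries in the log.
        byte[][] splitKeys = new byte[][] {
            Bytes.toBytes("aaaaa"),
            new byte[] {'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE},   // i\xBF\x14i\xBE
            new byte[] {'r', 0x1C, (byte) 0xC7, 'r', 0x1B},          // r\x1C\xC7r\x1B
            Bytes.toBytes("zzzzz")
        };
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splitKeys);   // blocks until the CreateTableProcedure finishes
        }
      }
    }

Passing four split keys yields the five regions (including the unbounded first and last) that the CreateTableProcedure assigns in the lines that follow.
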
2023-07-18 07:15:26,602 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:26,602 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664526602"}]},"ts":"1689664526602"} 2023-07-18 07:15:26,603 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-18 07:15:26,606 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:26,607 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:26,607 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:26,607 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:26,607 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, ASSIGN}] 2023-07-18 07:15:26,609 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, ASSIGN 2023-07-18 07:15:26,609 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, ASSIGN 2023-07-18 07:15:26,609 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, ASSIGN 2023-07-18 07:15:26,609 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, ASSIGN 2023-07-18 07:15:26,610 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:26,610 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, ASSIGN 2023-07-18 07:15:26,610 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:26,610 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:26,610 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42375,1689664504791; forceNewPlan=false, retain=false 2023-07-18 07:15:26,611 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41293,1689664501013; forceNewPlan=false, retain=false 2023-07-18 07:15:26,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 07:15:26,760 INFO [jenkins-hbase4:33141] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
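The recurring "Checking to see if procedure is done pid=132" lines are the master answering the client's completion polls: a synchronous Admin.createTable call keeps polling until the CreateTableProcedure finishes, and the asynchronous variant exposes the same wait as a Future. An assumed fragment, reusing the hypothetical admin, desc and splitKeys objects from the sketch above:

    // Assumed illustration only, not code from the test.
    java.util.concurrent.Future<Void> createFuture = admin.createTableAsync(desc, splitKeys);
    createFuture.get();  // blocks while the client polls the master for procedure completion
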
2023-07-18 07:15:26,763 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=a243dd04fabff6acd959589651598faf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:26,763 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=17ae3169ea39138d566218973ef1ad09, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:26,764 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664526763"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664526763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664526763"}]},"ts":"1689664526763"} 2023-07-18 07:15:26,763 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=1b2b3fa270afb32dce71345ca7a0697d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:26,763 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=23f1ee3193a9bcd52f6f94383a6e3ff4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:26,763 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=58d8eff1e305bbd4ac3b1e3e6df9d5e8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:26,764 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526763"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664526763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664526763"}]},"ts":"1689664526763"} 2023-07-18 07:15:26,764 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664526763"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664526763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664526763"}]},"ts":"1689664526763"} 2023-07-18 07:15:26,764 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526763"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664526763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664526763"}]},"ts":"1689664526763"} 2023-07-18 07:15:26,764 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526763"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664526763"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664526763"}]},"ts":"1689664526763"} 2023-07-18 07:15:26,766 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=133, state=RUNNABLE; OpenRegionProcedure a243dd04fabff6acd959589651598faf, 
server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:26,766 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure 23f1ee3193a9bcd52f6f94383a6e3ff4, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:26,767 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=137, state=RUNNABLE; OpenRegionProcedure 58d8eff1e305bbd4ac3b1e3e6df9d5e8, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:26,768 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; OpenRegionProcedure 1b2b3fa270afb32dce71345ca7a0697d, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:26,769 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=135, state=RUNNABLE; OpenRegionProcedure 17ae3169ea39138d566218973ef1ad09, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:26,921 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 23f1ee3193a9bcd52f6f94383a6e3ff4, NAME => 'Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-18 07:15:26,922 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a243dd04fabff6acd959589651598faf, NAME => 'Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,922 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,922 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,924 INFO [StoreOpener-23f1ee3193a9bcd52f6f94383a6e3ff4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,924 INFO [StoreOpener-a243dd04fabff6acd959589651598faf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,925 DEBUG [StoreOpener-23f1ee3193a9bcd52f6f94383a6e3ff4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/f 2023-07-18 07:15:26,925 DEBUG [StoreOpener-23f1ee3193a9bcd52f6f94383a6e3ff4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/f 2023-07-18 07:15:26,925 DEBUG [StoreOpener-a243dd04fabff6acd959589651598faf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/f 2023-07-18 07:15:26,925 DEBUG [StoreOpener-a243dd04fabff6acd959589651598faf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/f 2023-07-18 07:15:26,926 INFO [StoreOpener-23f1ee3193a9bcd52f6f94383a6e3ff4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 23f1ee3193a9bcd52f6f94383a6e3ff4 columnFamilyName f 2023-07-18 07:15:26,926 INFO [StoreOpener-a243dd04fabff6acd959589651598faf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a243dd04fabff6acd959589651598faf columnFamilyName f 2023-07-18 07:15:26,926 INFO [StoreOpener-23f1ee3193a9bcd52f6f94383a6e3ff4-1] regionserver.HStore(310): Store=23f1ee3193a9bcd52f6f94383a6e3ff4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:26,926 INFO [StoreOpener-a243dd04fabff6acd959589651598faf-1] regionserver.HStore(310): Store=a243dd04fabff6acd959589651598faf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:26,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:26,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a243dd04fabff6acd959589651598faf 2023-07-18 07:15:26,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:26,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a243dd04fabff6acd959589651598faf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11992611680, jitterRate=0.11689899861812592}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:26,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a243dd04fabff6acd959589651598faf: 2023-07-18 07:15:26,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/recovered.edits/1.seqid, 
newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:26,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf., pid=138, masterSystemTime=1689664526918 2023-07-18 07:15:26,935 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 23f1ee3193a9bcd52f6f94383a6e3ff4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10611188000, jitterRate=-0.011756107211112976}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:26,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 23f1ee3193a9bcd52f6f94383a6e3ff4: 2023-07-18 07:15:26,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4., pid=139, masterSystemTime=1689664526918 2023-07-18 07:15:26,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:26,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:26,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 58d8eff1e305bbd4ac3b1e3e6df9d5e8, NAME => 'Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-18 07:15:26,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,938 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,939 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=a243dd04fabff6acd959589651598faf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:26,939 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664526938"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664526938"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664526938"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664526938"}]},"ts":"1689664526938"} 2023-07-18 07:15:26,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:26,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:26,939 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:26,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b2b3fa270afb32dce71345ca7a0697d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-18 07:15:26,939 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=23f1ee3193a9bcd52f6f94383a6e3ff4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:26,939 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526939"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664526939"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664526939"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664526939"}]},"ts":"1689664526939"} 2023-07-18 07:15:26,939 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,940 INFO [StoreOpener-58d8eff1e305bbd4ac3b1e3e6df9d5e8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,942 INFO [StoreOpener-1b2b3fa270afb32dce71345ca7a0697d-1] regionserver.HStore(381): Created 
cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,942 DEBUG [StoreOpener-58d8eff1e305bbd4ac3b1e3e6df9d5e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/f 2023-07-18 07:15:26,942 DEBUG [StoreOpener-58d8eff1e305bbd4ac3b1e3e6df9d5e8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/f 2023-07-18 07:15:26,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=133 2023-07-18 07:15:26,943 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; OpenRegionProcedure a243dd04fabff6acd959589651598faf, server=jenkins-hbase4.apache.org,41293,1689664501013 in 174 msec 2023-07-18 07:15:26,943 INFO [StoreOpener-58d8eff1e305bbd4ac3b1e3e6df9d5e8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 58d8eff1e305bbd4ac3b1e3e6df9d5e8 columnFamilyName f 2023-07-18 07:15:26,943 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-18 07:15:26,943 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure 23f1ee3193a9bcd52f6f94383a6e3ff4, server=jenkins-hbase4.apache.org,42375,1689664504791 in 175 msec 2023-07-18 07:15:26,944 DEBUG [StoreOpener-1b2b3fa270afb32dce71345ca7a0697d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/f 2023-07-18 07:15:26,944 DEBUG [StoreOpener-1b2b3fa270afb32dce71345ca7a0697d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/f 2023-07-18 07:15:26,944 INFO [StoreOpener-58d8eff1e305bbd4ac3b1e3e6df9d5e8-1] regionserver.HStore(310): Store=58d8eff1e305bbd4ac3b1e3e6df9d5e8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:26,944 INFO [StoreOpener-1b2b3fa270afb32dce71345ca7a0697d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b2b3fa270afb32dce71345ca7a0697d columnFamilyName f 2023-07-18 07:15:26,944 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, ASSIGN in 336 msec 2023-07-18 07:15:26,945 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, ASSIGN in 337 msec 2023-07-18 07:15:26,945 INFO [StoreOpener-1b2b3fa270afb32dce71345ca7a0697d-1] regionserver.HStore(310): Store=1b2b3fa270afb32dce71345ca7a0697d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:26,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:26,949 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:26,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:26,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:26,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 
58d8eff1e305bbd4ac3b1e3e6df9d5e8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11769057600, jitterRate=0.09607890248298645}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:26,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b2b3fa270afb32dce71345ca7a0697d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9747458400, jitterRate=-0.09219719469547272}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:26,951 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 58d8eff1e305bbd4ac3b1e3e6df9d5e8: 2023-07-18 07:15:26,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b2b3fa270afb32dce71345ca7a0697d: 2023-07-18 07:15:26,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d., pid=141, masterSystemTime=1689664526918 2023-07-18 07:15:26,952 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8., pid=140, masterSystemTime=1689664526918 2023-07-18 07:15:26,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:26,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:26,954 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=1b2b3fa270afb32dce71345ca7a0697d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:26,954 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526954"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664526954"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664526954"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664526954"}]},"ts":"1689664526954"} 2023-07-18 07:15:26,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:26,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 
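The OpenRegionProcedures in this stretch report the new regions OPEN on jenkins-hbase4.apache.org,41293 and jenkins-hbase4.apache.org,42375, i.e. on servers still in the default group, since only the :39465 and :33769 servers were moved to Group_testDisabledTableMove_2080490170. A hypothetical placement check, not part of the logged test, could read the same information from the client side:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionPlacementCheck {
      // Prints encodedName -> server for every region of the table; illustrative only.
      static void printLocations(Connection conn) throws IOException {
        try (RegionLocator locator =
            conn.getRegionLocator(TableName.valueOf("Group_testDisabledTableMove"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
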
2023-07-18 07:15:26,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 17ae3169ea39138d566218973ef1ad09, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-18 07:15:26,955 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=58d8eff1e305bbd4ac3b1e3e6df9d5e8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:26,955 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664526955"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664526955"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664526955"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664526955"}]},"ts":"1689664526955"} 2023-07-18 07:15:26,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:26,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,957 INFO [StoreOpener-17ae3169ea39138d566218973ef1ad09-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,959 DEBUG [StoreOpener-17ae3169ea39138d566218973ef1ad09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/f 2023-07-18 07:15:26,959 DEBUG [StoreOpener-17ae3169ea39138d566218973ef1ad09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/f 2023-07-18 07:15:26,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-18 07:15:26,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; OpenRegionProcedure 1b2b3fa270afb32dce71345ca7a0697d, server=jenkins-hbase4.apache.org,42375,1689664504791 in 189 msec 2023-07-18 07:15:26,959 INFO [StoreOpener-17ae3169ea39138d566218973ef1ad09-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 17ae3169ea39138d566218973ef1ad09 columnFamilyName f 2023-07-18 07:15:26,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-18 07:15:26,959 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; OpenRegionProcedure 58d8eff1e305bbd4ac3b1e3e6df9d5e8, server=jenkins-hbase4.apache.org,41293,1689664501013 in 191 msec 2023-07-18 07:15:26,960 INFO [StoreOpener-17ae3169ea39138d566218973ef1ad09-1] regionserver.HStore(310): Store=17ae3169ea39138d566218973ef1ad09/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:26,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, ASSIGN in 352 msec 2023-07-18 07:15:26,960 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, ASSIGN in 352 msec 2023-07-18 07:15:26,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,961 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,963 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:26,965 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:26,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 17ae3169ea39138d566218973ef1ad09; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9553595360, jitterRate=-0.11025209724903107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:26,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 17ae3169ea39138d566218973ef1ad09: 2023-07-18 07:15:26,966 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09., pid=142, masterSystemTime=1689664526918 2023-07-18 07:15:26,967 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:26,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:26,968 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=17ae3169ea39138d566218973ef1ad09, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:26,968 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664526968"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664526968"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664526968"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664526968"}]},"ts":"1689664526968"} 2023-07-18 07:15:26,970 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=135 2023-07-18 07:15:26,970 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=135, state=SUCCESS; OpenRegionProcedure 17ae3169ea39138d566218973ef1ad09, server=jenkins-hbase4.apache.org,41293,1689664501013 in 200 msec 2023-07-18 07:15:26,972 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=132 2023-07-18 07:15:26,972 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, ASSIGN in 363 msec 2023-07-18 07:15:26,972 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:26,973 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664526972"}]},"ts":"1689664526972"} 2023-07-18 07:15:26,974 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-18 07:15:26,976 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:26,977 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 863 msec 2023-07-18 07:15:27,181 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-18 07:15:27,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-18 07:15:27,223 INFO [Listener at 
localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-18 07:15:27,224 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-18 07:15:27,224 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:27,227 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-18 07:15:27,227 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:27,228 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-18 07:15:27,228 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:27,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 07:15:27,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:27,235 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 07:15:27,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 07:15:27,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 07:15:27,239 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664527239"}]},"ts":"1689664527239"} 2023-07-18 07:15:27,240 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-18 07:15:27,242 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-18 07:15:27,243 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, UNASSIGN}] 2023-07-18 07:15:27,244 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, UNASSIGN 2023-07-18 07:15:27,245 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, UNASSIGN 2023-07-18 07:15:27,245 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, UNASSIGN 2023-07-18 07:15:27,245 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, UNASSIGN 2023-07-18 07:15:27,245 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, UNASSIGN 2023-07-18 07:15:27,245 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=17ae3169ea39138d566218973ef1ad09, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:27,245 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664527245"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664527245"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664527245"}]},"ts":"1689664527245"} 2023-07-18 07:15:27,245 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=a243dd04fabff6acd959589651598faf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:27,245 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=23f1ee3193a9bcd52f6f94383a6e3ff4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:27,246 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=58d8eff1e305bbd4ac3b1e3e6df9d5e8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:27,246 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664527245"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664527245"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664527245"}]},"ts":"1689664527245"} 2023-07-18 07:15:27,246 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664527245"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664527245"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664527245"}]},"ts":"1689664527245"} 2023-07-18 07:15:27,246 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=1b2b3fa270afb32dce71345ca7a0697d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:27,246 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664527246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664527246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664527246"}]},"ts":"1689664527246"} 2023-07-18 07:15:27,246 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664527246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664527246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664527246"}]},"ts":"1689664527246"} 2023-07-18 07:15:27,247 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=146, state=RUNNABLE; CloseRegionProcedure 17ae3169ea39138d566218973ef1ad09, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:27,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=145, state=RUNNABLE; CloseRegionProcedure 23f1ee3193a9bcd52f6f94383a6e3ff4, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:27,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=144, state=RUNNABLE; CloseRegionProcedure a243dd04fabff6acd959589651598faf, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:27,248 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure 58d8eff1e305bbd4ac3b1e3e6df9d5e8, server=jenkins-hbase4.apache.org,41293,1689664501013}] 2023-07-18 07:15:27,249 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=147, state=RUNNABLE; CloseRegionProcedure 1b2b3fa270afb32dce71345ca7a0697d, server=jenkins-hbase4.apache.org,42375,1689664504791}] 2023-07-18 07:15:27,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 07:15:27,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a243dd04fabff6acd959589651598faf 2023-07-18 07:15:27,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:27,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a243dd04fabff6acd959589651598faf, disabling compactions & flushes 2023-07-18 07:15:27,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 23f1ee3193a9bcd52f6f94383a6e3ff4, disabling 
compactions & flushes 2023-07-18 07:15:27,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:27,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:27,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:27,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:27,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. after waiting 0 ms 2023-07-18 07:15:27,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 2023-07-18 07:15:27,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. after waiting 0 ms 2023-07-18 07:15:27,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:27,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:27,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:27,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf. 2023-07-18 07:15:27,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4. 
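The Close/UnassignRegionHandler entries around this point are all driven by the single DisableTableProcedure stored as pid=143 above, which in turn is triggered by one client call. A minimal sketch, assuming an already-open Admin handle named admin (an assumption for illustration, not taken from the test source):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DisableSketch {                         // hypothetical helper, not from the test
  // Blocks until the DisableTableProcedure reaches SUCCESS and the table state is DISABLED.
  static void disableIfEnabled(Admin admin, String table) throws IOException {
    TableName tn = TableName.valueOf(table);
    if (admin.isTableEnabled(tn)) {                 // avoids the TableNotEnabledException logged further down
      admin.disableTable(tn);
    }
  }
}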
2023-07-18 07:15:27,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a243dd04fabff6acd959589651598faf: 2023-07-18 07:15:27,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 23f1ee3193a9bcd52f6f94383a6e3ff4: 2023-07-18 07:15:27,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:27,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:27,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b2b3fa270afb32dce71345ca7a0697d, disabling compactions & flushes 2023-07-18 07:15:27,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:27,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:27,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. after waiting 0 ms 2023-07-18 07:15:27,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 2023-07-18 07:15:27,408 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=23f1ee3193a9bcd52f6f94383a6e3ff4, regionState=CLOSED 2023-07-18 07:15:27,408 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664527408"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664527408"}]},"ts":"1689664527408"} 2023-07-18 07:15:27,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a243dd04fabff6acd959589651598faf 2023-07-18 07:15:27,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:27,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 17ae3169ea39138d566218973ef1ad09, disabling compactions & flushes 2023-07-18 07:15:27,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:27,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:27,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 
after waiting 0 ms 2023-07-18 07:15:27,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:27,411 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=a243dd04fabff6acd959589651598faf, regionState=CLOSED 2023-07-18 07:15:27,411 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664527411"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664527411"}]},"ts":"1689664527411"} 2023-07-18 07:15:27,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=145 2023-07-18 07:15:27,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=145, state=SUCCESS; CloseRegionProcedure 23f1ee3193a9bcd52f6f94383a6e3ff4, server=jenkins-hbase4.apache.org,42375,1689664504791 in 164 msec 2023-07-18 07:15:27,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:27,414 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=23f1ee3193a9bcd52f6f94383a6e3ff4, UNASSIGN in 169 msec 2023-07-18 07:15:27,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d. 
2023-07-18 07:15:27,414 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=144 2023-07-18 07:15:27,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b2b3fa270afb32dce71345ca7a0697d: 2023-07-18 07:15:27,414 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=144, state=SUCCESS; CloseRegionProcedure a243dd04fabff6acd959589651598faf, server=jenkins-hbase4.apache.org,41293,1689664501013 in 164 msec 2023-07-18 07:15:27,415 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=a243dd04fabff6acd959589651598faf, UNASSIGN in 171 msec 2023-07-18 07:15:27,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:27,416 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=1b2b3fa270afb32dce71345ca7a0697d, regionState=CLOSED 2023-07-18 07:15:27,416 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664527416"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664527416"}]},"ts":"1689664527416"} 2023-07-18 07:15:27,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:27,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09. 2023-07-18 07:15:27,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 17ae3169ea39138d566218973ef1ad09: 2023-07-18 07:15:27,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:27,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:27,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 58d8eff1e305bbd4ac3b1e3e6df9d5e8, disabling compactions & flushes 2023-07-18 07:15:27,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:27,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:27,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 
after waiting 0 ms 2023-07-18 07:15:27,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 2023-07-18 07:15:27,429 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=17ae3169ea39138d566218973ef1ad09, regionState=CLOSED 2023-07-18 07:15:27,430 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689664527429"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664527429"}]},"ts":"1689664527429"} 2023-07-18 07:15:27,430 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=147 2023-07-18 07:15:27,430 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=147, state=SUCCESS; CloseRegionProcedure 1b2b3fa270afb32dce71345ca7a0697d, server=jenkins-hbase4.apache.org,42375,1689664504791 in 169 msec 2023-07-18 07:15:27,432 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1b2b3fa270afb32dce71345ca7a0697d, UNASSIGN in 187 msec 2023-07-18 07:15:27,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:27,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8. 
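Once the disable completes (pid=143 below), the log shows the disabled table being moved to rsgroup Group_testDisabledTableMove_2080490170; because the table is disabled, the server rewrites only the group mapping and moves no regions ("Moving 0 region(s)"). A hedged sketch of that client call, assuming the RSGroupAdminClient from the hbase-rsgroup module; the connection handle and helper shape are assumptions for illustration:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveTableSketch {                       // hypothetical helper, not from the test
  static void moveDisabledTable(Connection conn) throws IOException {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    // For a disabled table only the group mapping changes; no regions are reassigned.
    groups.moveTables(
        Collections.singleton(TableName.valueOf("Group_testDisabledTableMove")),
        "Group_testDisabledTableMove_2080490170");
  }
}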
2023-07-18 07:15:27,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 58d8eff1e305bbd4ac3b1e3e6df9d5e8: 2023-07-18 07:15:27,438 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:27,438 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=146 2023-07-18 07:15:27,438 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=58d8eff1e305bbd4ac3b1e3e6df9d5e8, regionState=CLOSED 2023-07-18 07:15:27,438 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; CloseRegionProcedure 17ae3169ea39138d566218973ef1ad09, server=jenkins-hbase4.apache.org,41293,1689664501013 in 184 msec 2023-07-18 07:15:27,438 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689664527438"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664527438"}]},"ts":"1689664527438"} 2023-07-18 07:15:27,439 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=17ae3169ea39138d566218973ef1ad09, UNASSIGN in 195 msec 2023-07-18 07:15:27,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-18 07:15:27,441 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure 58d8eff1e305bbd4ac3b1e3e6df9d5e8, server=jenkins-hbase4.apache.org,41293,1689664501013 in 191 msec 2023-07-18 07:15:27,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-18 07:15:27,442 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=58d8eff1e305bbd4ac3b1e3e6df9d5e8, UNASSIGN in 198 msec 2023-07-18 07:15:27,442 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664527442"}]},"ts":"1689664527442"} 2023-07-18 07:15:27,443 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-18 07:15:27,446 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-18 07:15:27,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 211 msec 2023-07-18 07:15:27,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-18 07:15:27,542 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-18 07:15:27,542 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,544 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:27,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:27,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-18 07:15:27,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2080490170, current retry=0 2023-07-18 07:15:27,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_2080490170. 2023-07-18 07:15:27,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:27,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-18 07:15:27,553 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:27,555 INFO [Listener at localhost/33473] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-18 07:15:27,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-18 07:15:27,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at 
org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:27,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 923 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:52448 deadline: 1689664587555, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-18 07:15:27,556 DEBUG [Listener at localhost/33473] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-18 07:15:27,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-18 07:15:27,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,559 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_2080490170' 2023-07-18 07:15:27,560 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:27,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:27,567 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf 2023-07-18 07:15:27,567 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:27,567 DEBUG 
[HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:27,567 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:27,567 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:27,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-18 07:15:27,569 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/recovered.edits] 2023-07-18 07:15:27,569 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/recovered.edits] 2023-07-18 07:15:27,569 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/recovered.edits] 2023-07-18 07:15:27,569 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/recovered.edits] 2023-07-18 07:15:27,569 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/f, FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/recovered.edits] 2023-07-18 07:15:27,578 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/recovered.edits/4.seqid to 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8/recovered.edits/4.seqid 2023-07-18 07:15:27,578 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf/recovered.edits/4.seqid 2023-07-18 07:15:27,578 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4/recovered.edits/4.seqid 2023-07-18 07:15:27,578 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d/recovered.edits/4.seqid 2023-07-18 07:15:27,578 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/recovered.edits/4.seqid to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/archive/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09/recovered.edits/4.seqid 2023-07-18 07:15:27,579 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/58d8eff1e305bbd4ac3b1e3e6df9d5e8 2023-07-18 07:15:27,579 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/a243dd04fabff6acd959589651598faf 2023-07-18 07:15:27,579 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/23f1ee3193a9bcd52f6f94383a6e3ff4 2023-07-18 07:15:27,579 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/1b2b3fa270afb32dce71345ca7a0697d 2023-07-18 07:15:27,579 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/.tmp/data/default/Group_testDisabledTableMove/17ae3169ea39138d566218973ef1ad09 2023-07-18 07:15:27,580 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-18 07:15:27,582 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting 
regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,584 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-18 07:15:27,589 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664527591"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664527591"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664527591"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664527591"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:27,591 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664527591"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:27,593 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-18 07:15:27,593 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a243dd04fabff6acd959589651598faf, NAME => 'Group_testDisabledTableMove,,1689664526113.a243dd04fabff6acd959589651598faf.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 23f1ee3193a9bcd52f6f94383a6e3ff4, NAME => 'Group_testDisabledTableMove,aaaaa,1689664526113.23f1ee3193a9bcd52f6f94383a6e3ff4.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 17ae3169ea39138d566218973ef1ad09, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689664526113.17ae3169ea39138d566218973ef1ad09.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 1b2b3fa270afb32dce71345ca7a0697d, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689664526113.1b2b3fa270afb32dce71345ca7a0697d.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 58d8eff1e305bbd4ac3b1e3e6df9d5e8, NAME => 'Group_testDisabledTableMove,zzzzz,1689664526113.58d8eff1e305bbd4ac3b1e3e6df9d5e8.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-18 07:15:27,593 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 
'Group_testDisabledTableMove' as deleted. 2023-07-18 07:15:27,593 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664527593"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:27,595 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-18 07:15:27,598 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-18 07:15:27,599 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 41 msec 2023-07-18 07:15:27,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-18 07:15:27,670 INFO [Listener at localhost/33473] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-18 07:15:27,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:27,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
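The DELETE operation that completed above (procId 155) likewise maps to a single client call; because the table was already disabled, the test utility skips the disable step and deletes directly. A minimal sketch, again assuming an open Admin handle named admin; the guard conditions are illustrative, not copied from the test:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DeleteSketch {                          // hypothetical helper, not from the test
  // The table must already be DISABLED (pid=143 above) before it can be deleted.
  static void deleteDisabledTable(Admin admin, String table) throws IOException {
    TableName tn = TableName.valueOf(table);
    if (admin.tableExists(tn) && admin.isTableDisabled(tn)) {
      admin.deleteTable(tn);                        // drives the DeleteTableProcedure (pid=155 above)
    }
  }
}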
2023-07-18 07:15:27,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:27,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:33769] to rsgroup default 2023-07-18 07:15:27,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:27,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:27,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_2080490170, current retry=0 2023-07-18 07:15:27,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33769,1689664501155, jenkins-hbase4.apache.org,39465,1689664501221] are moved back to Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_2080490170 => default 2023-07-18 07:15:27,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:27,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_2080490170 2023-07-18 07:15:27,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:27,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:27,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:27,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:27,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
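The server moves and group removal above are the usual rsgroup teardown: the two region servers are returned to the default group and the now-empty test group is removed. A hedged sketch of those calls, again assuming RSGroupAdminClient; the host:port values are taken from the log and everything else is illustrative:

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class GroupTeardownSketch {                   // hypothetical helper, not from the test
  static void tearDown(Connection conn) throws IOException {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 39465));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33769));
    // Return the group's servers to 'default'; with no tables left, no regions move.
    groups.moveServers(servers, "default");
    // A group can only be removed once it holds no servers and no tables.
    groups.removeRSGroup("Group_testDisabledTableMove_2080490170");
  }
}

The ConstraintException logged just below comes from the same moveServers call made with the master's address (port 33141), which is not a live region server, so the move is rejected.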
2023-07-18 07:15:27,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:27,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:27,687 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:27,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:27,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:27,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:27,694 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:27,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:27,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:27,699 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:27,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:27,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:27,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:27,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 957 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665727704, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:27,705 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:27,707 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:27,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,707 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:27,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:27,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:27,731 INFO [Listener at localhost/33473] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514 (was 511) Potentially hanging thread: hconnection-0x320932da-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1322797093_17 at /127.0.0.1:36390 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xd65c55c-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-69851277_17 at /127.0.0.1:54744 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=806 (was 771) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 451), ProcessCount=174 (was 174), AvailableMemoryMB=2410 (was 2425) 2023-07-18 07:15:27,731 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 07:15:27,752 INFO [Listener at localhost/33473] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=514, OpenFileDescriptor=806, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=2410 2023-07-18 07:15:27,752 WARN [Listener at localhost/33473] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-18 07:15:27,752 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-18 07:15:27,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:27,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:27,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:27,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:27,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:27,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:27,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:27,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:27,767 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:27,770 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:27,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 
07:15:27,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:27,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:27,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:27,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33141] to rsgroup master 2023-07-18 07:15:27,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:27,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] ipc.CallRunner(144): callId: 985 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52448 deadline: 1689665727789, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 2023-07-18 07:15:27,790 WARN [Listener at localhost/33473] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:33141 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:27,792 INFO [Listener at localhost/33473] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:27,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:27,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:27,793 INFO [Listener at localhost/33473] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33769, jenkins-hbase4.apache.org:39465, jenkins-hbase4.apache.org:41293, jenkins-hbase4.apache.org:42375], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:27,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:27,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33141] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:27,793 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 07:15:27,793 INFO [Listener at localhost/33473] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 07:15:27,794 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x481b1111 to 127.0.0.1:57245 2023-07-18 07:15:27,794 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,795 DEBUG [Listener at localhost/33473] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 07:15:27,795 DEBUG [Listener at localhost/33473] util.JVMClusterUtil(257): Found active master hash=277234986, stopped=false 2023-07-18 07:15:27,796 DEBUG [Listener at localhost/33473] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 07:15:27,796 DEBUG [Listener at localhost/33473] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 07:15:27,796 INFO [Listener at localhost/33473] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:27,797 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:27,797 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:27,797 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:27,797 DEBUG 
[Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:27,797 INFO [Listener at localhost/33473] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 07:15:27,797 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:27,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:27,797 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:27,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:27,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:27,798 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:27,798 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1064): Closing user regions 2023-07-18 07:15:27,798 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(3305): Received CLOSE for 428bd5fcdb04976e830cf8a9b852f2cd 2023-07-18 07:15:27,798 DEBUG [Listener at localhost/33473] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3e02c867 to 127.0.0.1:57245 2023-07-18 07:15:27,799 DEBUG [Listener at localhost/33473] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 428bd5fcdb04976e830cf8a9b852f2cd, disabling compactions & flushes 2023-07-18 07:15:27,799 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(3305): Received CLOSE for a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:27,799 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41293,1689664501013' ***** 2023-07-18 07:15:27,799 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:27,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:27,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:27,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 
after waiting 0 ms 2023-07-18 07:15:27,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:27,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 428bd5fcdb04976e830cf8a9b852f2cd 1/1 column families, dataSize=22.08 KB heapSize=36.54 KB 2023-07-18 07:15:27,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:27,799 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(3305): Received CLOSE for 41aebbe53986314d2b2440254cc81255 2023-07-18 07:15:27,800 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33769,1689664501155' ***** 2023-07-18 07:15:27,800 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:27,800 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39465,1689664501221' ***** 2023-07-18 07:15:27,800 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:27,800 INFO [Listener at localhost/33473] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42375,1689664504791' ***** 2023-07-18 07:15:27,800 INFO [Listener at localhost/33473] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:27,800 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:27,800 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:27,800 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:27,800 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:27,808 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:27,808 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:27,808 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:27,813 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:27,813 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:27,816 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:27,816 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:27,816 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:27,817 INFO [RS:3;jenkins-hbase4:42375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@22f99e4c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:27,817 INFO [RS:1;jenkins-hbase4:33769] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@655a6898{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:27,817 INFO [RS:0;jenkins-hbase4:41293] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7c205c50{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:27,817 INFO [RS:2;jenkins-hbase4:39465] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6efb77a4{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:27,823 INFO [RS:3;jenkins-hbase4:42375] server.AbstractConnector(383): Stopped ServerConnector@251bffae{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:27,824 INFO [RS:1;jenkins-hbase4:33769] server.AbstractConnector(383): Stopped ServerConnector@8c593f2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:27,824 INFO [RS:3;jenkins-hbase4:42375] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:27,823 INFO [RS:2;jenkins-hbase4:39465] server.AbstractConnector(383): Stopped ServerConnector@1f112788{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:27,824 INFO [RS:1;jenkins-hbase4:33769] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:27,825 INFO [RS:3;jenkins-hbase4:42375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ad20967{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:27,825 INFO [RS:0;jenkins-hbase4:41293] server.AbstractConnector(383): Stopped ServerConnector@adfe6ba{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:27,825 INFO [RS:0;jenkins-hbase4:41293] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:27,825 INFO [RS:2;jenkins-hbase4:39465] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:27,830 INFO [RS:3;jenkins-hbase4:42375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@264de25c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:27,833 INFO [RS:3;jenkins-hbase4:42375] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:27,833 INFO [RS:3;jenkins-hbase4:42375] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:27,833 INFO [RS:3;jenkins-hbase4:42375] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 07:15:27,833 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(3307): Received CLOSE for the region: a9acafa34199e073a03f352ac1189b9f, which we are already trying to CLOSE, but not completed yet 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(3307): Received CLOSE for the region: 41aebbe53986314d2b2440254cc81255, which we are already trying to CLOSE, but not completed yet 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:27,834 DEBUG [RS:3;jenkins-hbase4:42375] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39cbd810 to 127.0.0.1:57245 2023-07-18 07:15:27,834 DEBUG [RS:3;jenkins-hbase4:42375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 07:15:27,834 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 07:15:27,835 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1478): Online Regions={428bd5fcdb04976e830cf8a9b852f2cd=hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd., 1588230740=hbase:meta,,1.1588230740, a9acafa34199e073a03f352ac1189b9f=unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f., 41aebbe53986314d2b2440254cc81255=hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255.} 2023-07-18 07:15:27,835 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:27,835 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:27,835 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:27,835 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:27,835 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:27,835 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.46 KB heapSize=61.09 KB 2023-07-18 07:15:27,835 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1504): Waiting on 1588230740, 41aebbe53986314d2b2440254cc81255, 428bd5fcdb04976e830cf8a9b852f2cd, a9acafa34199e073a03f352ac1189b9f 2023-07-18 07:15:27,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.08 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/.tmp/m/ab5e77e1350149b1bed3284a3e45bfff 2023-07-18 07:15:27,841 INFO 
[RS:2;jenkins-hbase4:39465] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@40731eac{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:27,841 INFO [RS:1;jenkins-hbase4:33769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1dbceb61{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:27,842 INFO [RS:2;jenkins-hbase4:39465] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@54df227c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:27,843 INFO [RS:1;jenkins-hbase4:33769] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6462cf1b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:27,841 INFO [RS:0;jenkins-hbase4:41293] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d349d91{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:27,844 INFO [RS:0;jenkins-hbase4:41293] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1d1f391{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:27,847 INFO [RS:2;jenkins-hbase4:39465] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:27,847 INFO [RS:2;jenkins-hbase4:39465] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:27,847 INFO [RS:2;jenkins-hbase4:39465] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:27,847 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:27,847 DEBUG [RS:2;jenkins-hbase4:39465] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4096f316 to 127.0.0.1:57245 2023-07-18 07:15:27,847 DEBUG [RS:2;jenkins-hbase4:39465] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,847 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39465,1689664501221; all regions closed. 2023-07-18 07:15:27,847 INFO [RS:1;jenkins-hbase4:33769] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:27,847 INFO [RS:1;jenkins-hbase4:33769] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:27,848 INFO [RS:1;jenkins-hbase4:33769] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-18 07:15:27,848 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:27,848 DEBUG [RS:1;jenkins-hbase4:33769] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a1d1a7c to 127.0.0.1:57245 2023-07-18 07:15:27,848 DEBUG [RS:1;jenkins-hbase4:33769] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,848 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33769,1689664501155; all regions closed. 2023-07-18 07:15:27,855 INFO [RS:0;jenkins-hbase4:41293] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:27,855 INFO [RS:0;jenkins-hbase4:41293] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:27,855 INFO [RS:0;jenkins-hbase4:41293] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:27,855 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(3305): Received CLOSE for 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:27,855 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:27,856 DEBUG [RS:0;jenkins-hbase4:41293] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6fc4ff21 to 127.0.0.1:57245 2023-07-18 07:15:27,856 DEBUG [RS:0;jenkins-hbase4:41293] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,856 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 07:15:27,856 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1478): Online Regions={7a6d703275ea82872b35fb13c8326bf5=testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5.} 2023-07-18 07:15:27,856 DEBUG [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1504): Waiting on 7a6d703275ea82872b35fb13c8326bf5 2023-07-18 07:15:27,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a6d703275ea82872b35fb13c8326bf5, disabling compactions & flushes 2023-07-18 07:15:27,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:27,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:27,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. after waiting 0 ms 2023-07-18 07:15:27,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
2023-07-18 07:15:27,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ab5e77e1350149b1bed3284a3e45bfff 2023-07-18 07:15:27,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/.tmp/m/ab5e77e1350149b1bed3284a3e45bfff as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m/ab5e77e1350149b1bed3284a3e45bfff 2023-07-18 07:15:27,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ab5e77e1350149b1bed3284a3e45bfff 2023-07-18 07:15:27,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/m/ab5e77e1350149b1bed3284a3e45bfff, entries=22, sequenceid=101, filesize=5.9 K 2023-07-18 07:15:27,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.08 KB/22614, heapSize ~36.52 KB/37400, currentSize=0 B/0 for 428bd5fcdb04976e830cf8a9b852f2cd in 78ms, sequenceid=101, compaction requested=false 2023-07-18 07:15:27,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/testRename/7a6d703275ea82872b35fb13c8326bf5/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 07:15:27,890 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 2023-07-18 07:15:27,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a6d703275ea82872b35fb13c8326bf5: 2023-07-18 07:15:27,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689664520468.7a6d703275ea82872b35fb13c8326bf5. 
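The entries between 07:15:27,865 and 07:15:27,877 show the hbase:rsgroup region flushing its ~22 KB memstore into the HFile ab5e77e1350149b1bed3284a3e45bfff, committing it from .tmp into the m store, and only then proceeding with the close. The same flush can be requested from a client at any time; a minimal sketch, again assuming the standard HBase 2.x client API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushRsGroupTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Asks the hosting region server to flush its memstore into a new HFile,
          // the same operation the shutdown path performed above for hbase:rsgroup.
          admin.flush(TableName.valueOf("hbase:rsgroup"));
        }
      }
    }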
2023-07-18 07:15:27,890 DEBUG [RS:1;jenkins-hbase4:33769] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs 2023-07-18 07:15:27,890 DEBUG [RS:2;jenkins-hbase4:39465] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs 2023-07-18 07:15:27,890 INFO [RS:1;jenkins-hbase4:33769] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33769%2C1689664501155:(num 1689664503252) 2023-07-18 07:15:27,896 DEBUG [RS:1;jenkins-hbase4:33769] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,896 INFO [RS:1;jenkins-hbase4:33769] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:27,895 INFO [RS:2;jenkins-hbase4:39465] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39465%2C1689664501221:(num 1689664503252) 2023-07-18 07:15:27,896 DEBUG [RS:2;jenkins-hbase4:39465] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:27,896 INFO [RS:2;jenkins-hbase4:39465] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:27,897 INFO [RS:1;jenkins-hbase4:33769] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:27,897 INFO [RS:2;jenkins-hbase4:39465] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:27,897 INFO [RS:1;jenkins-hbase4:33769] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:27,898 INFO [RS:1;jenkins-hbase4:33769] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:27,898 INFO [RS:1;jenkins-hbase4:33769] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:27,898 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:27,898 INFO [RS:2;jenkins-hbase4:39465] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:27,899 INFO [RS:2;jenkins-hbase4:39465] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:27,899 INFO [RS:2;jenkins-hbase4:39465] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:27,898 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
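Here both RS:1 and RS:2 archive their last WAL file into the shared oldWALs directory and close their AsyncFSWAL instances before the chore services and compaction threads wind down. Whether archival actually happened can be checked directly against HDFS; the sketch below assumes only the plain Hadoop FileSystem API, with the NameNode address and oldWALs path copied from this log (both are specific to this test run):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListOldWals {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hdfs://localhost:42711 and the oldWALs path are taken from the log above.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:42711"), conf)) {
          Path oldWals = new Path("/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs");
          for (FileStatus stat : fs.listStatus(oldWals)) {
            // Each archived WAL shows up here once the region server has moved it.
            System.out.println(stat.getPath().getName() + " " + stat.getLen() + " bytes");
          }
        }
      }
    }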
2023-07-18 07:15:27,900 INFO [RS:2;jenkins-hbase4:39465] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39465 2023-07-18 07:15:27,900 INFO [RS:1;jenkins-hbase4:33769] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33769 2023-07-18 07:15:27,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/rsgroup/428bd5fcdb04976e830cf8a9b852f2cd/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-18 07:15:27,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:27,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:27,905 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 428bd5fcdb04976e830cf8a9b852f2cd: 2023-07-18 07:15:27,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689664503883.428bd5fcdb04976e830cf8a9b852f2cd. 2023-07-18 07:15:27,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9acafa34199e073a03f352ac1189b9f, disabling compactions & flushes 2023-07-18 07:15:27,906 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.54 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/info/0d870ee191684c7ca681ad22b341930b 2023-07-18 07:15:27,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:27,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:27,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. after waiting 0 ms 2023-07-18 07:15:27,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:27,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/default/unmovedTable/a9acafa34199e073a03f352ac1189b9f/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-18 07:15:27,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0d870ee191684c7ca681ad22b341930b 2023-07-18 07:15:27,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 
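While the region servers stop their RPC endpoints, the meta-carrying server begins flushing hbase:meta: the info family is written out at sequenceid=210 here, and the rep_barrier and table families follow a few entries later. For reference, hbase:meta is an ordinary table from the client's point of view and can be scanned with the normal API; a minimal sketch, assuming the standard HBase 2.x client classes:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class ScanMeta {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner scanner = meta.getScanner(new Scan())) {
          for (Result row : scanner) {
            // Rows of hbase:meta hold region locations and table state; the flush
            // above persists them into the info, rep_barrier and table families.
            System.out.println(row);
          }
        }
      }
    }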
2023-07-18 07:15:27,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9acafa34199e073a03f352ac1189b9f: 2023-07-18 07:15:27,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689664522129.a9acafa34199e073a03f352ac1189b9f. 2023-07-18 07:15:27,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 41aebbe53986314d2b2440254cc81255, disabling compactions & flushes 2023-07-18 07:15:27,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:27,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:27,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. after waiting 0 ms 2023-07-18 07:15:27,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:27,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/namespace/41aebbe53986314d2b2440254cc81255/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-18 07:15:27,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 2023-07-18 07:15:27,929 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 41aebbe53986314d2b2440254cc81255: 2023-07-18 07:15:27,929 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689664503743.41aebbe53986314d2b2440254cc81255. 
2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:27,948 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33769,1689664501155 2023-07-18 07:15:27,948 ERROR [Listener at localhost/33473-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@2cdcb8ae rejected from java.util.concurrent.ThreadPoolExecutor@28e339ed[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-18 07:15:27,949 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:27,949 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:27,949 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:27,949 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:27,949 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39465,1689664501221 2023-07-18 07:15:27,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/rep_barrier/7b8a178f66c244ff86c84c8cec3efae0 2023-07-18 07:15:27,958 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b8a178f66c244ff86c84c8cec3efae0 2023-07-18 07:15:27,970 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/table/26243a3e2ae0443d83740dbefc617260 2023-07-18 07:15:27,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 26243a3e2ae0443d83740dbefc617260 2023-07-18 07:15:27,976 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/info/0d870ee191684c7ca681ad22b341930b as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info/0d870ee191684c7ca681ad22b341930b 2023-07-18 07:15:27,982 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0d870ee191684c7ca681ad22b341930b 2023-07-18 07:15:27,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/info/0d870ee191684c7ca681ad22b341930b, entries=62, sequenceid=210, filesize=11.8 K 2023-07-18 07:15:28,007 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/rep_barrier/7b8a178f66c244ff86c84c8cec3efae0 as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier/7b8a178f66c244ff86c84c8cec3efae0 2023-07-18 07:15:28,020 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b8a178f66c244ff86c84c8cec3efae0 2023-07-18 07:15:28,020 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/rep_barrier/7b8a178f66c244ff86c84c8cec3efae0, entries=8, sequenceid=210, filesize=5.8 K 2023-07-18 07:15:28,021 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/.tmp/table/26243a3e2ae0443d83740dbefc617260 as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table/26243a3e2ae0443d83740dbefc617260 2023-07-18 07:15:28,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 26243a3e2ae0443d83740dbefc617260 2023-07-18 07:15:28,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/table/26243a3e2ae0443d83740dbefc617260, entries=16, sequenceid=210, filesize=6.0 K 2023-07-18 07:15:28,030 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.46 KB/38356, heapSize ~61.05 KB/62512, currentSize=0 B/0 for 1588230740 in 195ms, sequenceid=210, compaction requested=false 2023-07-18 07:15:28,036 DEBUG [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-18 07:15:28,043 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=98 2023-07-18 07:15:28,043 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:28,044 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:28,044 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:28,044 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:28,050 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39465,1689664501221] 2023-07-18 07:15:28,050 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39465,1689664501221; numProcessing=1 2023-07-18 07:15:28,051 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39465,1689664501221 already 
deleted, retry=false 2023-07-18 07:15:28,051 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39465,1689664501221 expired; onlineServers=3 2023-07-18 07:15:28,051 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33769,1689664501155] 2023-07-18 07:15:28,051 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33769,1689664501155; numProcessing=2 2023-07-18 07:15:28,054 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33769,1689664501155 already deleted, retry=false 2023-07-18 07:15:28,054 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33769,1689664501155 expired; onlineServers=2 2023-07-18 07:15:28,054 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 07:15:28,054 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 07:15:28,055 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-18 07:15:28,055 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-18 07:15:28,056 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41293,1689664501013; all regions closed. 2023-07-18 07:15:28,074 DEBUG [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs 2023-07-18 07:15:28,074 INFO [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41293%2C1689664501013.meta:.meta(num 1689664503506) 2023-07-18 07:15:28,082 DEBUG [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs 2023-07-18 07:15:28,082 INFO [RS:0;jenkins-hbase4:41293] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41293%2C1689664501013:(num 1689664503243) 2023-07-18 07:15:28,082 DEBUG [RS:0;jenkins-hbase4:41293] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:28,082 INFO [RS:0;jenkins-hbase4:41293] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:28,082 INFO [RS:0;jenkins-hbase4:41293] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:28,082 INFO [RS:0;jenkins-hbase4:41293] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:28,082 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:28,083 INFO [RS:0;jenkins-hbase4:41293] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:28,083 INFO [RS:0;jenkins-hbase4:41293] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
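The ERROR logged by the ZooKeeper event thread at 07:15:27,948 further up looks like shutdown-ordering noise rather than a test failure: a NodeDeleted/NodeChildrenChanged notification arrives after the region server has already shut down the single-thread executor that ZKWatcher.process hands events to, so the submit is rejected. The mechanism is plain java.util.concurrent behaviour and can be reproduced with nothing but the JDK; a self-contained sketch:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionException;

    public class RejectAfterShutdown {
      public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> System.out.println("accepted while running"));
        pool.shutdown();                      // pool now refuses new tasks
        try {
          pool.submit(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
          // Same exception the ZKWatcher event thread logs above when a watch
          // callback arrives after the server's executor has been shut down.
          System.out.println("rejected: " + e);
        }
      }
    }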
2023-07-18 07:15:28,084 INFO [RS:0;jenkins-hbase4:41293] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41293 2023-07-18 07:15:28,088 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:28,088 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41293,1689664501013 2023-07-18 07:15:28,088 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:28,090 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41293,1689664501013] 2023-07-18 07:15:28,090 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41293,1689664501013; numProcessing=3 2023-07-18 07:15:28,091 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41293,1689664501013 already deleted, retry=false 2023-07-18 07:15:28,091 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41293,1689664501013 expired; onlineServers=1 2023-07-18 07:15:28,237 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42375,1689664504791; all regions closed. 2023-07-18 07:15:28,244 DEBUG [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs 2023-07-18 07:15:28,245 INFO [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42375%2C1689664504791.meta:.meta(num 1689664511472) 2023-07-18 07:15:28,251 DEBUG [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/oldWALs 2023-07-18 07:15:28,251 INFO [RS:3;jenkins-hbase4:42375] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42375%2C1689664504791:(num 1689664505108) 2023-07-18 07:15:28,251 DEBUG [RS:3;jenkins-hbase4:42375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:28,251 INFO [RS:3;jenkins-hbase4:42375] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:28,251 INFO [RS:3;jenkins-hbase4:42375] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:28,251 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
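The master notices each region server going away through ZooKeeper: deleting the ephemeral znode under /hbase/rs fires NodeDeleted and NodeChildrenChanged events, which RegionServerTracker turns into the "processing expiration" entries above. The same child-watch pattern can be observed with the raw ZooKeeper client; a minimal sketch, assuming the standard org.apache.zookeeper API, with the quorum address and znode path copied from the log:

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsChildrenWatch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Quorum address 127.0.0.1:57245 and the /hbase/rs znode are taken from the log.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:57245", 30000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();
          }
        });
        connected.await();
        // One-shot watch: fires with NodeChildrenChanged when a region server's
        // ephemeral znode is created or deleted, as in the events above.
        List<String> servers = zk.getChildren("/hbase/rs",
            event -> System.out.println("children changed: " + event.getType()));
        System.out.println("online region servers: " + servers);
        zk.close();
      }
    }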
2023-07-18 07:15:28,252 INFO [RS:3;jenkins-hbase4:42375] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42375 2023-07-18 07:15:28,255 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42375,1689664504791 2023-07-18 07:15:28,255 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:28,256 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42375,1689664504791] 2023-07-18 07:15:28,257 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42375,1689664504791; numProcessing=4 2023-07-18 07:15:28,258 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42375,1689664504791 already deleted, retry=false 2023-07-18 07:15:28,258 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42375,1689664504791 expired; onlineServers=0 2023-07-18 07:15:28,258 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33141,1689664498534' ***** 2023-07-18 07:15:28,258 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 07:15:28,259 DEBUG [M:0;jenkins-hbase4:33141] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@51bb4f11, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:28,259 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:28,262 INFO [M:0;jenkins-hbase4:33141] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@64d6f665{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 07:15:28,262 INFO [M:0;jenkins-hbase4:33141] server.AbstractConnector(383): Stopped ServerConnector@16cb9a38{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:28,262 INFO [M:0;jenkins-hbase4:33141] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:28,262 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:28,262 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:28,263 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:28,265 INFO [M:0;jenkins-hbase4:33141] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@1b664373{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:28,266 INFO [M:0;jenkins-hbase4:33141] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@660b604c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:28,266 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33141,1689664498534 2023-07-18 07:15:28,266 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33141,1689664498534; all regions closed. 2023-07-18 07:15:28,266 DEBUG [M:0;jenkins-hbase4:33141] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:28,266 INFO [M:0;jenkins-hbase4:33141] master.HMaster(1491): Stopping master jetty server 2023-07-18 07:15:28,267 INFO [M:0;jenkins-hbase4:33141] server.AbstractConnector(383): Stopped ServerConnector@7a1a3238{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:28,267 DEBUG [M:0;jenkins-hbase4:33141] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 07:15:28,268 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 07:15:28,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664502855] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664502855,5,FailOnTimeoutGroup] 2023-07-18 07:15:28,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664502854] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664502854,5,FailOnTimeoutGroup] 2023-07-18 07:15:28,268 DEBUG [M:0;jenkins-hbase4:33141] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 07:15:28,268 INFO [M:0;jenkins-hbase4:33141] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 07:15:28,268 INFO [M:0;jenkins-hbase4:33141] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-18 07:15:28,268 INFO [M:0;jenkins-hbase4:33141] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 07:15:28,268 DEBUG [M:0;jenkins-hbase4:33141] master.HMaster(1512): Stopping service threads 2023-07-18 07:15:28,268 INFO [M:0;jenkins-hbase4:33141] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 07:15:28,268 ERROR [M:0;jenkins-hbase4:33141] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-18 07:15:28,269 INFO [M:0;jenkins-hbase4:33141] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 07:15:28,269 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 07:15:28,270 DEBUG [M:0;jenkins-hbase4:33141] zookeeper.ZKUtil(398): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 07:15:28,270 WARN [M:0;jenkins-hbase4:33141] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 07:15:28,270 INFO [M:0;jenkins-hbase4:33141] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 07:15:28,270 INFO [M:0;jenkins-hbase4:33141] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 07:15:28,270 DEBUG [M:0;jenkins-hbase4:33141] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 07:15:28,270 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:28,270 DEBUG [M:0;jenkins-hbase4:33141] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:28,270 DEBUG [M:0;jenkins-hbase4:33141] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 07:15:28,270 DEBUG [M:0;jenkins-hbase4:33141] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 07:15:28,270 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.01 KB heapSize=621.10 KB 2023-07-18 07:15:28,287 INFO [M:0;jenkins-hbase4:33141] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.01 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/794fa9e8422e4ccc9b5844f4d4b23972 2023-07-18 07:15:28,295 DEBUG [M:0;jenkins-hbase4:33141] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/794fa9e8422e4ccc9b5844f4d4b23972 as hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/794fa9e8422e4ccc9b5844f4d4b23972 2023-07-18 07:15:28,298 INFO [RS:0;jenkins-hbase4:41293] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41293,1689664501013; zookeeper connection closed. 2023-07-18 07:15:28,298 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,298 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:41293-0x1017748aa600001, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,303 INFO [M:0;jenkins-hbase4:33141] regionserver.HStore(1080): Added hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/794fa9e8422e4ccc9b5844f4d4b23972, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-18 07:15:28,304 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegion(2948): Finished flush of dataSize ~519.01 KB/531468, heapSize ~621.09 KB/635992, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 34ms, sequenceid=1152, compaction requested=false 2023-07-18 07:15:28,315 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2cab0006] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2cab0006 2023-07-18 07:15:28,316 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:28,317 DEBUG [M:0;jenkins-hbase4:33141] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:28,324 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:28,324 INFO [M:0;jenkins-hbase4:33141] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
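With the procedure store flushed (the 519 KB proc family above) and master:store closed, the master itself is nearly done, and the harness will shortly report the whole mini cluster as down before starting a fresh one for the next test. In test code this entire cascade is normally driven by the HBaseTestingUtility lifecycle; the following is a rough sketch only, assuming the branch-2.4 HBaseTestingUtility and StartMiniClusterOption builder APIs and mirroring the option string printed by the harness:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class MiniClusterLifecycle {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Mirrors the options logged by the harness: 1 master, 3 region servers,
        // 3 data nodes, 1 ZooKeeper server.
        TEST_UTIL.startMiniCluster(StartMiniClusterOption.builder()
            .numMasters(1).numRegionServers(3).numDataNodes(3).numZkServers(1).build());
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Produces the shutdown cascade recorded above: region closes and flushes,
        // WAL archival, ZK ephemeral node cleanup, master stop, "Minicluster is down".
        TEST_UTIL.shutdownMiniCluster();
      }
    }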
2023-07-18 07:15:28,325 INFO [M:0;jenkins-hbase4:33141] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33141 2023-07-18 07:15:28,327 DEBUG [M:0;jenkins-hbase4:33141] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33141,1689664498534 already deleted, retry=false 2023-07-18 07:15:28,398 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,398 INFO [RS:1;jenkins-hbase4:33769] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33769,1689664501155; zookeeper connection closed. 2023-07-18 07:15:28,398 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:33769-0x1017748aa600002, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,398 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@41f73062] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@41f73062 2023-07-18 07:15:28,498 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,498 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:39465-0x1017748aa600003, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,498 INFO [RS:2;jenkins-hbase4:39465] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39465,1689664501221; zookeeper connection closed. 2023-07-18 07:15:28,499 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c0d2f55] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c0d2f55 2023-07-18 07:15:28,598 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,599 INFO [M:0;jenkins-hbase4:33141] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33141,1689664498534; zookeeper connection closed. 2023-07-18 07:15:28,599 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): master:33141-0x1017748aa600000, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,699 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,699 INFO [RS:3;jenkins-hbase4:42375] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42375,1689664504791; zookeeper connection closed. 
2023-07-18 07:15:28,699 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): regionserver:42375-0x1017748aa60000b, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:28,699 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@c6b944] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@c6b944 2023-07-18 07:15:28,699 INFO [Listener at localhost/33473] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 07:15:28,700 WARN [Listener at localhost/33473] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:28,710 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:28,714 WARN [BP-1810549220-172.31.14.131-1689664494510 heartbeating to localhost/127.0.0.1:42711] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:28,714 WARN [BP-1810549220-172.31.14.131-1689664494510 heartbeating to localhost/127.0.0.1:42711] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1810549220-172.31.14.131-1689664494510 (Datanode Uuid 6fea3d87-1566-4324-bbdb-4df95f4bb34d) service to localhost/127.0.0.1:42711 2023-07-18 07:15:28,716 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data5/current/BP-1810549220-172.31.14.131-1689664494510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:28,717 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data6/current/BP-1810549220-172.31.14.131-1689664494510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:28,720 WARN [Listener at localhost/33473] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:28,729 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:28,771 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:28,772 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 07:15:28,772 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 07:15:28,832 WARN [BP-1810549220-172.31.14.131-1689664494510 heartbeating to localhost/127.0.0.1:42711] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:28,833 WARN [BP-1810549220-172.31.14.131-1689664494510 heartbeating to localhost/127.0.0.1:42711] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1810549220-172.31.14.131-1689664494510 (Datanode Uuid 779d76c0-b477-40b2-aa4b-0597835d3a42) service to localhost/127.0.0.1:42711 2023-07-18 07:15:28,833 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data3/current/BP-1810549220-172.31.14.131-1689664494510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:28,834 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data4/current/BP-1810549220-172.31.14.131-1689664494510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:28,835 WARN [Listener at localhost/33473] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:28,837 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:28,941 WARN [BP-1810549220-172.31.14.131-1689664494510 heartbeating to localhost/127.0.0.1:42711] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:28,942 WARN [BP-1810549220-172.31.14.131-1689664494510 heartbeating to localhost/127.0.0.1:42711] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1810549220-172.31.14.131-1689664494510 (Datanode Uuid 1325a98d-4e2c-49c8-aeb4-3983ab019f05) service to localhost/127.0.0.1:42711 2023-07-18 07:15:28,942 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data1/current/BP-1810549220-172.31.14.131-1689664494510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:28,943 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/cluster_358b183c-ed7f-22d8-bc16-6d714b54baf2/dfs/data/data2/current/BP-1810549220-172.31.14.131-1689664494510] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:28,978 INFO [Listener at localhost/33473] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:29,097 INFO [Listener at localhost/33473] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 07:15:29,147 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 07:15:29,147 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 07:15:29,147 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.log.dir so I do NOT create it in target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe 2023-07-18 07:15:29,147 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e961d498-1851-5904-bc8e-304e92220a29/hadoop.tmp.dir so I do NOT create it in target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe 2023-07-18 07:15:29,147 INFO [Listener at localhost/33473] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534, deleteOnExit=true 2023-07-18 07:15:29,147 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/test.cache.data in system properties and HBase conf 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir in system properties and HBase conf 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 07:15:29,148 DEBUG [Listener at localhost/33473] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 07:15:29,148 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/nfs.dump.dir in system properties and HBase conf 2023-07-18 07:15:29,149 INFO [Listener at localhost/33473] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/java.io.tmpdir in system properties and HBase conf 2023-07-18 07:15:29,150 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 07:15:29,150 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 07:15:29,150 INFO [Listener at localhost/33473] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 07:15:29,154 WARN [Listener at localhost/33473] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 07:15:29,154 WARN [Listener at localhost/33473] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 07:15:29,194 DEBUG [Listener at localhost/33473-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1017748aa60000a, quorum=127.0.0.1:57245, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 07:15:29,194 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1017748aa60000a, quorum=127.0.0.1:57245, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 07:15:29,198 WARN [Listener at localhost/33473] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-18 07:15:29,250 WARN [Listener at localhost/33473] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:29,253 INFO [Listener at localhost/33473] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:29,261 INFO [Listener at localhost/33473] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/java.io.tmpdir/Jetty_localhost_45169_hdfs____l3t7p9/webapp 2023-07-18 07:15:29,357 INFO [Listener at localhost/33473] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45169 2023-07-18 07:15:29,362 WARN [Listener at localhost/33473] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 07:15:29,362 WARN [Listener at localhost/33473] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 07:15:29,414 WARN [Listener at localhost/39713] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:29,431 
WARN [Listener at localhost/39713] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:15:29,434 WARN [Listener at localhost/39713] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:29,435 INFO [Listener at localhost/39713] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:29,444 INFO [Listener at localhost/39713] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/java.io.tmpdir/Jetty_localhost_39565_datanode____.yzxzez/webapp 2023-07-18 07:15:29,550 INFO [Listener at localhost/39713] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39565 2023-07-18 07:15:29,558 WARN [Listener at localhost/39277] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:29,584 WARN [Listener at localhost/39277] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:15:29,586 WARN [Listener at localhost/39277] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:29,587 INFO [Listener at localhost/39277] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:29,591 INFO [Listener at localhost/39277] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/java.io.tmpdir/Jetty_localhost_41211_datanode____.tom06c/webapp 2023-07-18 07:15:29,693 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcad59a02d0fac8ae: Processing first storage report for DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f from datanode e33ad5f5-f0af-40f6-ad47-609c0b54b740 2023-07-18 07:15:29,693 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcad59a02d0fac8ae: from storage DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f node DatanodeRegistration(127.0.0.1:36697, datanodeUuid=e33ad5f5-f0af-40f6-ad47-609c0b54b740, infoPort=45387, infoSecurePort=0, ipcPort=39277, storageInfo=lv=-57;cid=testClusterID;nsid=1654175781;c=1689664529157), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:29,693 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xcad59a02d0fac8ae: Processing first storage report for DS-b6b1d3ea-671c-4ebb-a337-e6288abc02d0 from datanode e33ad5f5-f0af-40f6-ad47-609c0b54b740 2023-07-18 07:15:29,693 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xcad59a02d0fac8ae: from storage DS-b6b1d3ea-671c-4ebb-a337-e6288abc02d0 node DatanodeRegistration(127.0.0.1:36697, datanodeUuid=e33ad5f5-f0af-40f6-ad47-609c0b54b740, infoPort=45387, infoSecurePort=0, ipcPort=39277, storageInfo=lv=-57;cid=testClusterID;nsid=1654175781;c=1689664529157), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:29,712 INFO [Listener at localhost/39277] log.Slf4jLog(67): Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41211 2023-07-18 07:15:29,721 WARN [Listener at localhost/42411] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:29,746 WARN [Listener at localhost/42411] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:15:29,750 WARN [Listener at localhost/42411] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:29,752 INFO [Listener at localhost/42411] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:29,763 INFO [Listener at localhost/42411] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/java.io.tmpdir/Jetty_localhost_42337_datanode____bzzz7k/webapp 2023-07-18 07:15:29,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa067581a6a565c03: Processing first storage report for DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1 from datanode 57602f4b-0757-44b1-bc18-17a87fd5a918 2023-07-18 07:15:29,853 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa067581a6a565c03: from storage DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1 node DatanodeRegistration(127.0.0.1:45071, datanodeUuid=57602f4b-0757-44b1-bc18-17a87fd5a918, infoPort=40963, infoSecurePort=0, ipcPort=42411, storageInfo=lv=-57;cid=testClusterID;nsid=1654175781;c=1689664529157), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:29,853 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa067581a6a565c03: Processing first storage report for DS-e46aa16f-50ea-40b7-9825-970f7c7836c6 from datanode 57602f4b-0757-44b1-bc18-17a87fd5a918 2023-07-18 07:15:29,853 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa067581a6a565c03: from storage DS-e46aa16f-50ea-40b7-9825-970f7c7836c6 node DatanodeRegistration(127.0.0.1:45071, datanodeUuid=57602f4b-0757-44b1-bc18-17a87fd5a918, infoPort=40963, infoSecurePort=0, ipcPort=42411, storageInfo=lv=-57;cid=testClusterID;nsid=1654175781;c=1689664529157), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:29,904 INFO [Listener at localhost/42411] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42337 2023-07-18 07:15:29,914 WARN [Listener at localhost/44381] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:30,027 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x18efcce1cf20d6a6: Processing first storage report for DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3 from datanode 466feb48-3a2c-4ded-9833-34d2037c2ee0 2023-07-18 07:15:30,027 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x18efcce1cf20d6a6: from storage DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3 node DatanodeRegistration(127.0.0.1:42991, datanodeUuid=466feb48-3a2c-4ded-9833-34d2037c2ee0, infoPort=38161, infoSecurePort=0, ipcPort=44381, storageInfo=lv=-57;cid=testClusterID;nsid=1654175781;c=1689664529157), blocks: 0, hasStaleStorage: true, processing 
time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:30,027 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x18efcce1cf20d6a6: Processing first storage report for DS-50831050-eef5-4d63-9507-bb7af56929f4 from datanode 466feb48-3a2c-4ded-9833-34d2037c2ee0 2023-07-18 07:15:30,027 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x18efcce1cf20d6a6: from storage DS-50831050-eef5-4d63-9507-bb7af56929f4 node DatanodeRegistration(127.0.0.1:42991, datanodeUuid=466feb48-3a2c-4ded-9833-34d2037c2ee0, infoPort=38161, infoSecurePort=0, ipcPort=44381, storageInfo=lv=-57;cid=testClusterID;nsid=1654175781;c=1689664529157), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:30,033 DEBUG [Listener at localhost/44381] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe 2023-07-18 07:15:30,038 INFO [Listener at localhost/44381] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/zookeeper_0, clientPort=57544, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 07:15:30,039 INFO [Listener at localhost/44381] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57544 2023-07-18 07:15:30,040 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,041 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,063 INFO [Listener at localhost/44381] util.FSUtils(471): Created version file at hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250 with version=8 2023-07-18 07:15:30,064 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/hbase-staging 2023-07-18 07:15:30,065 DEBUG [Listener at localhost/44381] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 07:15:30,065 DEBUG [Listener at localhost/44381] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 07:15:30,065 DEBUG [Listener at localhost/44381] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 07:15:30,065 DEBUG [Listener at localhost/44381] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
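At this point the log shows the mini DFS, the MiniZooKeeperCluster (clientPort=57544) and the HBase root/version file all in place, with the LocalHBaseCluster about to start masters and region servers on random ports. As a minimal sketch, assuming the public HBaseTestingUtility / StartMiniClusterOption API, the bootstrap that produces a sequence like this looks roughly as follows; the class name MiniClusterSketch is illustrative, and the counts are simply read off the processes that appear later in this log (one master, three region servers, three datanodes, one ZooKeeper server), not taken from the test's actual configuration.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)          // one HMaster, bound to a random port as logged above
            .numRegionServers(3)    // three region servers appear further down in this log
            .numDataNodes(3)        // the three datanodes whose block reports appear above
            .numZkServers(1)        // single MiniZooKeeperCluster node on a random client port
            .build();
        util.startMiniCluster(option);  // brings up DFS, ZooKeeper, master and region servers
        try {
          // test methods would run against util.getConnection() here
        } finally {
          util.shutdownMiniCluster();   // stops the cluster and removes the test-data directories
        }
      }
    }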
2023-07-18 07:15:30,066 INFO [Listener at localhost/44381] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:30,066 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,066 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,066 INFO [Listener at localhost/44381] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:30,066 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,067 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:30,067 INFO [Listener at localhost/44381] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:30,069 INFO [Listener at localhost/44381] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45091 2023-07-18 07:15:30,070 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,071 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,072 INFO [Listener at localhost/44381] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45091 connecting to ZooKeeper ensemble=127.0.0.1:57544 2023-07-18 07:15:30,080 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:450910x0, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:30,081 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45091-0x101774929760000 connected 2023-07-18 07:15:30,097 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:30,097 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:30,098 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:30,099 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45091 2023-07-18 07:15:30,102 DEBUG [Listener at localhost/44381] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45091 2023-07-18 07:15:30,104 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45091 2023-07-18 07:15:30,105 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45091 2023-07-18 07:15:30,106 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45091 2023-07-18 07:15:30,108 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:30,108 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:30,108 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:30,109 INFO [Listener at localhost/44381] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 07:15:30,109 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:30,109 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:30,109 INFO [Listener at localhost/44381] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
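The repeated "Set watcher on znode that does not yet exist" lines above come from registering existence watches on /hbase/master, /hbase/running and /hbase/acl before the active master has created those znodes. A minimal sketch of that pattern, using the plain ZooKeeper client API rather than HBase's ZKWatcher/ZKUtil wrappers: exists() both checks for the node and leaves a watch that fires when it is later created. The quorum string and znode paths are copied from the log; the class name ZkWatchSketch and the session timeout are illustrative.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkWatchSketch {
      public static void main(String[] args) throws Exception {
        Watcher watcher = (WatchedEvent event) ->
            System.out.println("Received ZooKeeper Event, type=" + event.getType()
                + ", state=" + event.getState() + ", path=" + event.getPath());
        ZooKeeper zk = new ZooKeeper("127.0.0.1:57544", 90_000, watcher);
        // Registers a watch even when the znode does not exist yet; the watcher
        // above is notified once the active master creates /hbase/master.
        zk.exists("/hbase/master", true);
        zk.exists("/hbase/running", true);
        zk.exists("/hbase/acl", true);
      }
    }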
2023-07-18 07:15:30,110 INFO [Listener at localhost/44381] http.HttpServer(1146): Jetty bound to port 36125 2023-07-18 07:15:30,110 INFO [Listener at localhost/44381] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:30,116 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,117 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@71745bed{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:30,117 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,117 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1839aa60{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:30,124 INFO [Listener at localhost/44381] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:30,125 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:30,125 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:30,126 INFO [Listener at localhost/44381] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:30,127 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,128 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@39b24fff{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 07:15:30,129 INFO [Listener at localhost/44381] server.AbstractConnector(333): Started ServerConnector@6a76b1f2{HTTP/1.1, (http/1.1)}{0.0.0.0:36125} 2023-07-18 07:15:30,129 INFO [Listener at localhost/44381] server.Server(415): Started @37697ms 2023-07-18 07:15:30,129 INFO [Listener at localhost/44381] master.HMaster(444): hbase.rootdir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250, hbase.cluster.distributed=false 2023-07-18 07:15:30,144 INFO [Listener at localhost/44381] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:30,144 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,144 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,144 INFO [Listener at localhost/44381] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 
07:15:30,144 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,144 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:30,144 INFO [Listener at localhost/44381] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:30,147 INFO [Listener at localhost/44381] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37561 2023-07-18 07:15:30,147 INFO [Listener at localhost/44381] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:30,150 DEBUG [Listener at localhost/44381] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:30,150 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,151 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,152 INFO [Listener at localhost/44381] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37561 connecting to ZooKeeper ensemble=127.0.0.1:57544 2023-07-18 07:15:30,155 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:375610x0, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:30,156 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:375610x0, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:30,157 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:375610x0, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:30,157 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:375610x0, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:30,159 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37561-0x101774929760001 connected 2023-07-18 07:15:30,159 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37561 2023-07-18 07:15:30,162 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37561 2023-07-18 07:15:30,163 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37561 2023-07-18 07:15:30,164 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37561 2023-07-18 07:15:30,166 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, 
port=37561 2023-07-18 07:15:30,168 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:30,168 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:30,168 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:30,169 INFO [Listener at localhost/44381] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:30,169 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:30,169 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:30,169 INFO [Listener at localhost/44381] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:30,170 INFO [Listener at localhost/44381] http.HttpServer(1146): Jetty bound to port 35261 2023-07-18 07:15:30,170 INFO [Listener at localhost/44381] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:30,175 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,175 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@13121dc7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:30,175 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,176 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@df9e121{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:30,184 INFO [Listener at localhost/44381] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:30,185 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:30,185 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:30,185 INFO [Listener at localhost/44381] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:30,186 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,187 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@fc97682{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:30,188 INFO [Listener at localhost/44381] server.AbstractConnector(333): Started ServerConnector@39054181{HTTP/1.1, (http/1.1)}{0.0.0.0:35261} 2023-07-18 07:15:30,189 INFO [Listener at localhost/44381] server.Server(415): Started @37757ms 2023-07-18 07:15:30,207 INFO [Listener at localhost/44381] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:30,207 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,207 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,207 INFO [Listener at localhost/44381] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:30,207 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,208 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:30,208 INFO [Listener at localhost/44381] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:30,209 INFO [Listener at localhost/44381] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37347 2023-07-18 07:15:30,209 INFO [Listener at localhost/44381] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:30,211 DEBUG [Listener at localhost/44381] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:30,211 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,212 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,213 INFO [Listener at localhost/44381] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37347 connecting to ZooKeeper ensemble=127.0.0.1:57544 2023-07-18 07:15:30,219 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:373470x0, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:30,221 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37347-0x101774929760002 connected 2023-07-18 07:15:30,221 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): 
regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:30,221 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:30,222 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:30,223 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37347 2023-07-18 07:15:30,223 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37347 2023-07-18 07:15:30,223 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37347 2023-07-18 07:15:30,224 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37347 2023-07-18 07:15:30,225 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37347 2023-07-18 07:15:30,226 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:30,227 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:30,227 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:30,227 INFO [Listener at localhost/44381] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:30,227 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:30,227 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:30,227 INFO [Listener at localhost/44381] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
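Once the master and region servers have registered themselves in ZooKeeper as shown above, a client only needs the quorum address to reach the cluster. A hedged sketch of that client-side view, assuming the standard Connection/Admin API: the quorum host and client port (127.0.0.1:57544) are taken from these lines, while the class name ClientSketch and the listTableNames() call are just an illustrative round trip through the master.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");           // quorum host from the log
        conf.setInt("hbase.zookeeper.property.clientPort", 57544); // mini ZooKeeper client port
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          admin.listTableNames();  // simple round trip that resolves master and meta via ZooKeeper
        }
      }
    }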
2023-07-18 07:15:30,228 INFO [Listener at localhost/44381] http.HttpServer(1146): Jetty bound to port 35611 2023-07-18 07:15:30,228 INFO [Listener at localhost/44381] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:30,231 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,231 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@490fef5a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:30,231 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,232 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@20973997{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:30,238 INFO [Listener at localhost/44381] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:30,240 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:30,240 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:30,240 INFO [Listener at localhost/44381] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 07:15:30,241 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,242 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4d24713f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:30,244 INFO [Listener at localhost/44381] server.AbstractConnector(333): Started ServerConnector@7f57ce5d{HTTP/1.1, (http/1.1)}{0.0.0.0:35611} 2023-07-18 07:15:30,244 INFO [Listener at localhost/44381] server.Server(415): Started @37812ms 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:30,256 INFO [Listener at localhost/44381] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:30,257 INFO [Listener at localhost/44381] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44321 2023-07-18 07:15:30,258 INFO [Listener at localhost/44381] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:30,260 DEBUG [Listener at localhost/44381] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:30,260 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,261 INFO [Listener at localhost/44381] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,263 INFO [Listener at localhost/44381] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44321 connecting to ZooKeeper ensemble=127.0.0.1:57544 2023-07-18 07:15:30,266 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:443210x0, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:30,268 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44321-0x101774929760003 connected 2023-07-18 07:15:30,268 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:30,268 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:30,269 DEBUG [Listener at localhost/44381] zookeeper.ZKUtil(164): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:30,270 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44321 2023-07-18 07:15:30,270 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44321 2023-07-18 07:15:30,271 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44321 2023-07-18 07:15:30,271 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44321 2023-07-18 07:15:30,271 DEBUG [Listener at localhost/44381] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44321 2023-07-18 07:15:30,273 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:30,273 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:30,273 INFO [Listener at localhost/44381] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:30,274 INFO [Listener at localhost/44381] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:30,274 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:30,274 INFO [Listener at localhost/44381] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:30,274 INFO [Listener at localhost/44381] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:30,275 INFO [Listener at localhost/44381] http.HttpServer(1146): Jetty bound to port 44241 2023-07-18 07:15:30,275 INFO [Listener at localhost/44381] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:30,276 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,276 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3bece4cf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:30,277 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,277 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a3eb869{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:30,283 INFO [Listener at localhost/44381] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:30,284 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:30,284 INFO [Listener at localhost/44381] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:30,284 INFO [Listener at localhost/44381] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 07:15:30,285 INFO [Listener at localhost/44381] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:30,286 INFO [Listener at localhost/44381] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3f001a9a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:30,288 INFO [Listener at localhost/44381] server.AbstractConnector(333): Started ServerConnector@4c21c98c{HTTP/1.1, (http/1.1)}{0.0.0.0:44241} 2023-07-18 07:15:30,288 INFO [Listener at localhost/44381] server.Server(415): Started @37856ms 2023-07-18 07:15:30,291 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:30,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@50abe707{HTTP/1.1, (http/1.1)}{0.0.0.0:42115} 2023-07-18 07:15:30,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37866ms 2023-07-18 07:15:30,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,300 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 07:15:30,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,301 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:30,301 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:30,301 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:30,302 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:30,302 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:30,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:30,305 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45091,1689664530065 from backup master directory 2023-07-18 
07:15:30,305 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:30,307 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,308 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:30,308 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 07:15:30,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,326 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/hbase.id with ID: c3952cdd-2fe8-43f4-bc32-20cc520e8fcd 2023-07-18 07:15:30,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:30,341 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:30,352 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x12a0e162 to 127.0.0.1:57544 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:30,357 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@30db274a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:30,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:30,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 07:15:30,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:30,360 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store-tmp 2023-07-18 07:15:30,369 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 07:15:30,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 07:15:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:30,370 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
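The 'master:store' descriptor printed above, with its single 'proc' column family, can be reproduced with the public descriptor builders. A sketch under that assumption, setting only a few of the printed attributes (BLOOMFILTER, VERSIONS, BLOCKSIZE); the remaining values in the log line are the builder defaults, and the class name StoreDescriptorSketch is illustrative.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
      public static void main(String[] args) {
        TableDescriptor masterStore = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("master", "store"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                .setMaxVersions(1)                  // VERSIONS => '1'
                .setBlocksize(65536)                // BLOCKSIZE => '65536'
                .build())
            .build();
        System.out.println(masterStore);  // prints a descriptor similar to the log line above
      }
    }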
2023-07-18 07:15:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:30,370 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/WALs/jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,373 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45091%2C1689664530065, suffix=, logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/WALs/jenkins-hbase4.apache.org,45091,1689664530065, archiveDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/oldWALs, maxLogs=10 2023-07-18 07:15:30,391 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK] 2023-07-18 07:15:30,391 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK] 2023-07-18 07:15:30,392 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK] 2023-07-18 07:15:30,395 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/WALs/jenkins-hbase4.apache.org,45091,1689664530065/jenkins-hbase4.apache.org%2C45091%2C1689664530065.1689664530373 2023-07-18 07:15:30,396 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK], DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK], DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK]] 2023-07-18 07:15:30,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:30,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:30,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:30,397 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:30,399 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:30,439 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 07:15:30,440 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 07:15:30,441 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,441 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:30,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:30,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:30,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:30,447 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9876458720, jitterRate=-0.08018310368061066}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:30,447 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:30,448 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 07:15:30,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 07:15:30,449 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 07:15:30,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 07:15:30,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 07:15:30,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 07:15:30,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 07:15:30,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 07:15:30,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 07:15:30,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 07:15:30,453 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 07:15:30,453 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 07:15:30,456 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:30,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 07:15:30,456 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 07:15:30,457 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 07:15:30,459 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:30,459 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:30,460 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-18 07:15:30,460 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:30,459 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:30,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45091,1689664530065, sessionid=0x101774929760000, setting cluster-up flag (Was=false) 2023-07-18 07:15:30,465 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:30,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 07:15:30,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,473 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:30,478 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 07:15:30,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:30,481 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.hbase-snapshot/.tmp 2023-07-18 07:15:30,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 07:15:30,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 07:15:30,484 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 07:15:30,485 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 07:15:30,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
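The entries above show the master registering the RSGroupAdminService coprocessor endpoint, which is the feature this test exercises. As a reference only (not part of this log), a minimal sketch of the settings that typically enable rsgroups on an HBase 2.x cluster; the test utility wires these up programmatically, and the key names are taken from the rsgroup documentation rather than from this output.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EnableRSGroups {
  public static Configuration rsGroupConf() {
    Configuration conf = HBaseConfiguration.create();
    // Load the RSGroupAdminEndpoint master coprocessor (logged above as
    // "Registered master coprocessor service: service=RSGroupAdminService").
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Route balancing decisions through the group-aware balancer.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}
```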
2023-07-18 07:15:30,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-18 07:15:30,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:30,491 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(951): ClusterId : c3952cdd-2fe8-43f4-bc32-20cc520e8fcd 2023-07-18 07:15:30,496 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:30,499 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(951): ClusterId : c3952cdd-2fe8-43f4-bc32-20cc520e8fcd 2023-07-18 07:15:30,500 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:30,506 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(951): ClusterId : c3952cdd-2fe8-43f4-bc32-20cc520e8fcd 2023-07-18 07:15:30,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 07:15:30,507 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 07:15:30,508 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:30,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 07:15:30,508 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:30,508 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:30,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
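The balancer line above prints the StochasticLoadBalancer tuning it loaded (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000). A sketch of the knobs behind those numbers; the key names are assumed from StochasticLoadBalancer and are not emitted by this test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuning {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    return conf;
  }
}
```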
2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:30,509 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,512 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:30,512 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:30,512 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:30,512 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:30,514 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:30,515 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:30,515 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:30,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689664560520 2023-07-18 07:15:30,520 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ReadOnlyZKClient(139): Connect 0x5dbe7c66 to 127.0.0.1:57544 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:30,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 07:15:30,520 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ReadOnlyZKClient(139): Connect 0x2a416279 to 127.0.0.1:57544 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 
07:15:30,520 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ReadOnlyZKClient(139): Connect 0x1cc7aaed to 127.0.0.1:57544 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:30,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 07:15:30,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 07:15:30,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 07:15:30,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 07:15:30,521 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 07:15:30,523 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:30,523 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 07:15:30,524 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:30,526 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
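The InitMetaProcedure entry above dumps the hbase:meta table descriptor it creates (families info, rep_barrier, table). A minimal sketch of building a descriptor with the same 'info' family attributes (VERSIONS => '3', IN_MEMORY => 'true', BLOCKSIZE => '8192') via the 2.x builder API; the table name here is hypothetical and the test itself never creates such a table.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptor {
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo:meta_like"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setMaxVersions(3)   // VERSIONS => '3'
            .setInMemory(true)   // IN_MEMORY => 'true'
            .setBlocksize(8192)  // BLOCKSIZE => '8192'
            .build())
        .build();
  }
}
```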
2023-07-18 07:15:30,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 07:15:30,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 07:15:30,535 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 07:15:30,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 07:15:30,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 07:15:30,548 DEBUG [RS:1;jenkins-hbase4:37347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@12005705, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:30,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664530547,5,FailOnTimeoutGroup] 2023-07-18 07:15:30,548 DEBUG [RS:1;jenkins-hbase4:37347] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@163fc19d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:30,548 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664530548,5,FailOnTimeoutGroup] 2023-07-18 07:15:30,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 07:15:30,548 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,549 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
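For orientation only: the cleaner chores initialized above run on the master's cleaner interval, and the plugin keys are the usual way extra cleaners are registered. The 600000 ms value matches the LogsCleaner/HFileCleaner periods in the log; the key names themselves are assumed from HBase defaults, not from this test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerChoreConfig {
  public static Configuration cleaners() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.master.cleaner.interval", 600_000); // LogsCleaner/HFileCleaner period
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner");
    conf.set("hbase.master.hfilecleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner");
    return conf;
  }
}
```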
2023-07-18 07:15:30,552 DEBUG [RS:0;jenkins-hbase4:37561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@57461716, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:30,552 DEBUG [RS:2;jenkins-hbase4:44321] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@20b90909, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:30,552 DEBUG [RS:0;jenkins-hbase4:37561] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1244839a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:30,552 DEBUG [RS:2;jenkins-hbase4:44321] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@65bb837a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:30,562 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37347 2023-07-18 07:15:30,562 INFO [RS:1;jenkins-hbase4:37347] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:30,562 INFO [RS:1;jenkins-hbase4:37347] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:30,562 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:30,563 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:30,563 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45091,1689664530065 with isa=jenkins-hbase4.apache.org/172.31.14.131:37347, startcode=1689664530206 2023-07-18 07:15:30,564 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37561 2023-07-18 07:15:30,564 DEBUG [RS:1;jenkins-hbase4:37347] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:30,564 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:30,564 INFO [RS:0;jenkins-hbase4:37561] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:30,564 INFO [RS:0;jenkins-hbase4:37561] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:30,564 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 07:15:30,564 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250 2023-07-18 07:15:30,564 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44321 2023-07-18 07:15:30,564 INFO [RS:2;jenkins-hbase4:44321] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:30,564 INFO [RS:2;jenkins-hbase4:44321] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:30,564 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:30,565 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45091,1689664530065 with isa=jenkins-hbase4.apache.org/172.31.14.131:37561, startcode=1689664530143 2023-07-18 07:15:30,565 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45091,1689664530065 with isa=jenkins-hbase4.apache.org/172.31.14.131:44321, startcode=1689664530255 2023-07-18 07:15:30,565 DEBUG [RS:0;jenkins-hbase4:37561] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:30,565 DEBUG [RS:2;jenkins-hbase4:44321] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:30,567 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36217, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:30,572 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45091] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,572 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 07:15:30,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 07:15:30,573 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250 2023-07-18 07:15:30,573 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39713 2023-07-18 07:15:30,573 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36125 2023-07-18 07:15:30,573 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43279, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:30,574 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58779, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:30,574 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45091] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,574 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 07:15:30,574 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 07:15:30,574 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45091] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,574 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
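ServerManager is registering the three region servers here, and the rsgroup listener's "Updated with servers" counter tracks the same membership for the default group. A hypothetical client-side check (nothing in this log runs it) that lists the registered servers through the Admin API:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListRegionServers {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      for (ServerName sn : admin.getRegionServers()) {
        System.out.println(sn.getServerName()); // host,port,startcode as logged above
      }
    }
  }
}
```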
2023-07-18 07:15:30,574 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 07:15:30,575 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250 2023-07-18 07:15:30,575 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39713 2023-07-18 07:15:30,575 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36125 2023-07-18 07:15:30,575 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250 2023-07-18 07:15:30,575 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39713 2023-07-18 07:15:30,575 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36125 2023-07-18 07:15:30,576 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:30,581 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ZKUtil(162): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,581 WARN [RS:0;jenkins-hbase4:37561] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:30,581 INFO [RS:0;jenkins-hbase4:37561] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:30,581 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,581 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ZKUtil(162): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,581 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ZKUtil(162): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,581 WARN [RS:2;jenkins-hbase4:44321] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:30,581 WARN [RS:1;jenkins-hbase4:37347] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
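The "Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider" entries above reflect the WAL provider selection. A sketch of the setting behind it, shown only to make the mapping explicit; "asyncfs" is the 2.x default, so the test does not set it itself.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static Configuration asyncFsWal() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.wal.provider", "asyncfs"); // -> AsyncFSWALProvider
    return conf;
  }
}
```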
2023-07-18 07:15:30,581 INFO [RS:2;jenkins-hbase4:44321] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:30,581 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37561,1689664530143] 2023-07-18 07:15:30,581 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,581 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44321,1689664530255] 2023-07-18 07:15:30,581 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37347,1689664530206] 2023-07-18 07:15:30,581 INFO [RS:1;jenkins-hbase4:37347] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:30,581 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,595 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ZKUtil(162): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,595 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ZKUtil(162): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,595 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ZKUtil(162): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,595 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ZKUtil(162): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,595 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ZKUtil(162): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,596 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ZKUtil(162): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,596 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ZKUtil(162): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,596 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ZKUtil(162): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,597 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ZKUtil(162): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 
07:15:30,598 DEBUG [RS:0;jenkins-hbase4:37561] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:30,598 DEBUG [RS:1;jenkins-hbase4:37347] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:30,598 INFO [RS:0;jenkins-hbase4:37561] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:30,598 INFO [RS:1;jenkins-hbase4:37347] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:30,598 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:30,598 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:30,599 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:30,599 INFO [RS:0;jenkins-hbase4:37561] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:30,600 INFO [RS:0;jenkins-hbase4:37561] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:30,600 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,601 INFO [RS:2;jenkins-hbase4:44321] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:30,601 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:30,601 INFO [RS:1;jenkins-hbase4:37347] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:30,601 INFO [RS:1;jenkins-hbase4:37347] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:30,601 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,601 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:30,603 INFO [RS:2;jenkins-hbase4:44321] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:30,603 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,603 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
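The MemStoreFlusher and compaction-throughput lines above report a 782.4 M global memstore limit and 100/50 MB/s throughput bounds. A sketch relating those figures to their usual configuration knobs; the key names are assumed, and the memstore figure is simply the configured heap fraction applied to the test JVM's heap.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionServerMemoryAndThroughput {
  public static Configuration tune() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of the region server heap reserved for memstores.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    // Bounds used by PressureAwareCompactionThroughputController.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}
```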
2023-07-18 07:15:30,603 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,603 INFO [RS:2;jenkins-hbase4:44321] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:30,604 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,604 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:30,604 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:30,604 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,605 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,605 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,605 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,605 DEBUG [RS:0;jenkins-hbase4:37561] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,605 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,605 DEBUG [RS:1;jenkins-hbase4:37347] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,606 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,606 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,607 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/info 2023-07-18 07:15:30,607 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,607 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:30,607 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:30,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:30,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:30,610 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:30,610 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,610 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,611 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,611 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
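The CompactionConfiguration dumps above (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2) correspond to the standard per-store compaction keys. A sketch with the same values the log prints; key names are taken from the HBase docs rather than this output.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuning {
  public static Configuration exploringDefaults() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);         // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);        // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);  // ratio 1.200000
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    return conf;
  }
}
```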
2023-07-18 07:15:30,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:30,612 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,612 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,612 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,612 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,612 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,612 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,612 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,613 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:30,613 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,613 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,613 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,613 DEBUG [RS:2;jenkins-hbase4:44321] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:30,613 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/table 2023-07-18 07:15:30,614 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:30,614 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,615 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740 2023-07-18 07:15:30,615 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740 2023-07-18 07:15:30,617 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 07:15:30,619 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:30,623 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,623 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,624 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,624 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,630 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:30,630 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11009448960, jitterRate=0.025334835052490234}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:30,630 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:30,630 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:30,630 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:30,630 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:30,631 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:30,631 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:30,633 INFO [RS:0;jenkins-hbase4:37561] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:30,633 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37561,1689664530143-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
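The FlushLargeStoresPolicy line above falls back to memstore-flush-size divided by the number of families because hbase:meta's descriptor carries no explicit per-family lower bound. A sketch of setting that bound on a hypothetical table; only the property named in the log is used, and the 16 MB value is illustrative.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class PerFamilyFlushBound {
  public static TableDescriptor withFlushLowerBound() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo:t"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
        // Per-family memstore size that must accumulate before FlushLargeStoresPolicy
        // flushes that family selectively instead of the whole region.
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(16L * 1024 * 1024))
        .build();
  }
}
```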
2023-07-18 07:15:30,634 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:30,634 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:30,635 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:30,635 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 07:15:30,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 07:15:30,635 INFO [RS:1;jenkins-hbase4:37347] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:30,635 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37347,1689664530206-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,636 INFO [RS:2;jenkins-hbase4:44321] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:30,636 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44321,1689664530255-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,637 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 07:15:30,639 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 07:15:30,649 INFO [RS:2;jenkins-hbase4:44321] regionserver.Replication(203): jenkins-hbase4.apache.org,44321,1689664530255 started 2023-07-18 07:15:30,649 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44321,1689664530255, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44321, sessionid=0x101774929760003 2023-07-18 07:15:30,649 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:30,649 DEBUG [RS:2;jenkins-hbase4:44321] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,649 DEBUG [RS:2;jenkins-hbase4:44321] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44321,1689664530255' 2023-07-18 07:15:30,649 DEBUG [RS:2;jenkins-hbase4:44321] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:30,650 DEBUG [RS:2;jenkins-hbase4:44321] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:30,650 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:30,650 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:30,650 DEBUG [RS:2;jenkins-hbase4:44321] 
snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,650 DEBUG [RS:2;jenkins-hbase4:44321] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44321,1689664530255' 2023-07-18 07:15:30,650 DEBUG [RS:2;jenkins-hbase4:44321] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:30,651 DEBUG [RS:2;jenkins-hbase4:44321] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:30,651 DEBUG [RS:2;jenkins-hbase4:44321] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:30,651 INFO [RS:2;jenkins-hbase4:44321] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 07:15:30,653 INFO [RS:1;jenkins-hbase4:37347] regionserver.Replication(203): jenkins-hbase4.apache.org,37347,1689664530206 started 2023-07-18 07:15:30,653 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37347,1689664530206, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37347, sessionid=0x101774929760002 2023-07-18 07:15:30,653 INFO [RS:0;jenkins-hbase4:37561] regionserver.Replication(203): jenkins-hbase4.apache.org,37561,1689664530143 started 2023-07-18 07:15:30,653 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:30,653 DEBUG [RS:1;jenkins-hbase4:37347] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,653 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37561,1689664530143, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37561, sessionid=0x101774929760001 2023-07-18 07:15:30,653 DEBUG [RS:1;jenkins-hbase4:37347] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37347,1689664530206' 2023-07-18 07:15:30,653 DEBUG [RS:1;jenkins-hbase4:37347] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:30,653 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:30,653 DEBUG [RS:0;jenkins-hbase4:37561] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,653 DEBUG [RS:0;jenkins-hbase4:37561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37561,1689664530143' 2023-07-18 07:15:30,653 DEBUG [RS:0;jenkins-hbase4:37561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:30,653 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:30,654 DEBUG [RS:1;jenkins-hbase4:37347] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:30,654 DEBUG [RS:0;jenkins-hbase4:37561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:30,654 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ZKUtil(398): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 07:15:30,654 INFO [RS:2;jenkins-hbase4:44321] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 07:15:30,654 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:30,654 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:30,654 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:30,654 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:30,654 DEBUG [RS:0;jenkins-hbase4:37561] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:30,654 DEBUG [RS:0;jenkins-hbase4:37561] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37561,1689664530143' 2023-07-18 07:15:30,654 DEBUG [RS:0;jenkins-hbase4:37561] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:30,654 DEBUG [RS:1;jenkins-hbase4:37347] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:30,654 DEBUG [RS:1;jenkins-hbase4:37347] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37347,1689664530206' 2023-07-18 07:15:30,654 DEBUG [RS:1;jenkins-hbase4:37347] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:30,654 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,655 DEBUG [RS:0;jenkins-hbase4:37561] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:30,655 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,655 DEBUG [RS:1;jenkins-hbase4:37347] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:30,655 DEBUG [RS:0;jenkins-hbase4:37561] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:30,655 INFO [RS:0;jenkins-hbase4:37561] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 07:15:30,655 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:30,655 DEBUG [RS:1;jenkins-hbase4:37347] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:30,655 INFO [RS:1;jenkins-hbase4:37347] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-18 07:15:30,655 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,655 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ZKUtil(398): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 07:15:30,656 INFO [RS:0;jenkins-hbase4:37561] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 07:15:30,656 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ZKUtil(398): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-18 07:15:30,656 INFO [RS:1;jenkins-hbase4:37347] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-18 07:15:30,656 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,656 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,656 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:30,656 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:30,758 INFO [RS:1;jenkins-hbase4:37347] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37347%2C1689664530206, suffix=, logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,37347,1689664530206, archiveDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs, maxLogs=32 2023-07-18 07:15:30,758 INFO [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44321%2C1689664530255, suffix=, logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,44321,1689664530255, archiveDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs, maxLogs=32 2023-07-18 07:15:30,758 INFO [RS:0;jenkins-hbase4:37561] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37561%2C1689664530143, suffix=, logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,37561,1689664530143, archiveDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs, maxLogs=32 2023-07-18 07:15:30,779 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK] 2023-07-18 07:15:30,779 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK] 2023-07-18 07:15:30,779 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK] 2023-07-18 07:15:30,785 INFO [RS:1;jenkins-hbase4:37347] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,37347,1689664530206/jenkins-hbase4.apache.org%2C37347%2C1689664530206.1689664530760 2023-07-18 07:15:30,787 DEBUG [RS:1;jenkins-hbase4:37347] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK], DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK], DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK]] 2023-07-18 07:15:30,789 DEBUG [jenkins-hbase4:45091] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 07:15:30,790 DEBUG [jenkins-hbase4:45091] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:30,790 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK] 2023-07-18 07:15:30,790 
DEBUG [jenkins-hbase4:45091] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:30,790 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK] 2023-07-18 07:15:30,790 DEBUG [jenkins-hbase4:45091] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:30,790 DEBUG [jenkins-hbase4:45091] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:30,790 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK] 2023-07-18 07:15:30,790 DEBUG [jenkins-hbase4:45091] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:30,792 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44321,1689664530255, state=OPENING 2023-07-18 07:15:30,792 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK] 2023-07-18 07:15:30,793 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK] 2023-07-18 07:15:30,793 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK] 2023-07-18 07:15:30,795 DEBUG [PEWorker-5] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 07:15:30,795 INFO [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,44321,1689664530255/jenkins-hbase4.apache.org%2C44321%2C1689664530255.1689664530767 2023-07-18 07:15:30,795 INFO [RS:0;jenkins-hbase4:37561] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,37561,1689664530143/jenkins-hbase4.apache.org%2C37561%2C1689664530143.1689664530767 2023-07-18 07:15:30,795 DEBUG [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK], DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK], DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK]] 2023-07-18 07:15:30,795 DEBUG [RS:0;jenkins-hbase4:37561] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK], DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK], 
DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK]] 2023-07-18 07:15:30,796 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:30,798 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44321,1689664530255}] 2023-07-18 07:15:30,798 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:30,799 WARN [ReadOnlyZKClient-127.0.0.1:57544@0x12a0e162] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 07:15:30,800 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:30,801 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40350, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:30,802 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44321] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:40350 deadline: 1689664590802, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,952 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:30,953 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:30,955 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:30,959 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 07:15:30,959 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:30,961 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44321%2C1689664530255.meta, suffix=.meta, logDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,44321,1689664530255, archiveDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs, maxLogs=32 2023-07-18 07:15:30,975 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK] 2023-07-18 07:15:30,976 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK] 2023-07-18 07:15:30,977 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK] 2023-07-18 07:15:30,978 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/WALs/jenkins-hbase4.apache.org,44321,1689664530255/jenkins-hbase4.apache.org%2C44321%2C1689664530255.meta.1689664530961.meta 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36697,DS-6358a9be-5143-409f-b9a9-f8d28c3e6f6f,DISK], DatanodeInfoWithStorage[127.0.0.1:42991,DS-536fbf22-45cd-413f-b38b-b2b8ab74c1f3,DISK], DatanodeInfoWithStorage[127.0.0.1:45071,DS-4d051853-7636-4ca7-b1f9-bc42bb0affd1,DISK]] 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 07:15:30,980 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 07:15:30,980 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 07:15:30,982 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:30,983 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/info 2023-07-18 07:15:30,983 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/info 2023-07-18 07:15:30,983 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:30,984 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,984 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:30,985 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:30,985 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:30,985 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:30,986 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,986 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:30,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/table 2023-07-18 07:15:30,987 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/table 2023-07-18 07:15:30,987 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:30,988 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:30,988 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740 2023-07-18 07:15:30,989 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740 2023-07-18 07:15:30,991 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 07:15:30,992 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:30,993 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10865172000, jitterRate=0.011897996068000793}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:30,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:30,994 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689664530952 2023-07-18 07:15:30,998 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 07:15:30,999 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 07:15:30,999 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44321,1689664530255, state=OPEN 2023-07-18 07:15:31,001 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 07:15:31,001 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:31,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 07:15:31,003 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44321,1689664530255 in 205 msec 2023-07-18 07:15:31,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 07:15:31,004 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 367 msec 2023-07-18 07:15:31,005 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 518 msec 2023-07-18 07:15:31,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689664531005, completionTime=-1 2023-07-18 07:15:31,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 07:15:31,006 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 07:15:31,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 07:15:31,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689664591009 2023-07-18 07:15:31,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689664651009 2023-07-18 07:15:31,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45091,1689664530065-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45091,1689664530065-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45091,1689664530065-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45091, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 07:15:31,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:31,016 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 07:15:31,016 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 07:15:31,017 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:31,017 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:31,019 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,019 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f empty. 2023-07-18 07:15:31,020 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,020 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 07:15:31,031 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:31,032 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e58c166379c92472d3b2261a4ddc054f, NAME => 'hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp 2023-07-18 07:15:31,040 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,040 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e58c166379c92472d3b2261a4ddc054f, disabling compactions & flushes 2023-07-18 07:15:31,041 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 
2023-07-18 07:15:31,041 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:31,041 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. after waiting 0 ms 2023-07-18 07:15:31,041 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:31,041 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:31,041 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e58c166379c92472d3b2261a4ddc054f: 2023-07-18 07:15:31,043 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:31,044 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664531043"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664531043"}]},"ts":"1689664531043"} 2023-07-18 07:15:31,046 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:31,046 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:31,046 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531046"}]},"ts":"1689664531046"} 2023-07-18 07:15:31,047 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 07:15:31,051 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:31,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:31,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:31,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:31,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:31,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e58c166379c92472d3b2261a4ddc054f, ASSIGN}] 2023-07-18 07:15:31,054 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e58c166379c92472d3b2261a4ddc054f, ASSIGN 2023-07-18 07:15:31,055 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e58c166379c92472d3b2261a4ddc054f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44321,1689664530255; forceNewPlan=false, retain=false 2023-07-18 07:15:31,105 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:31,107 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 07:15:31,108 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:31,109 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:31,111 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,111 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b empty. 
2023-07-18 07:15:31,112 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,112 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 07:15:31,123 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:31,124 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b49f57bdede1bc5da2682c0eb9ee388b, NAME => 'hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp 2023-07-18 07:15:31,134 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,134 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing b49f57bdede1bc5da2682c0eb9ee388b, disabling compactions & flushes 2023-07-18 07:15:31,134 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:31,134 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:31,134 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. after waiting 0 ms 2023-07-18 07:15:31,134 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:31,134 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 
2023-07-18 07:15:31,134 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for b49f57bdede1bc5da2682c0eb9ee388b: 2023-07-18 07:15:31,137 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:31,137 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664531137"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664531137"}]},"ts":"1689664531137"} 2023-07-18 07:15:31,139 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:31,139 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:31,139 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531139"}]},"ts":"1689664531139"} 2023-07-18 07:15:31,141 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 07:15:31,144 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:31,144 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:31,144 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:31,144 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:31,144 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:31,144 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b49f57bdede1bc5da2682c0eb9ee388b, ASSIGN}] 2023-07-18 07:15:31,148 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b49f57bdede1bc5da2682c0eb9ee388b, ASSIGN 2023-07-18 07:15:31,148 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=b49f57bdede1bc5da2682c0eb9ee388b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44321,1689664530255; forceNewPlan=false, retain=false 2023-07-18 07:15:31,149 INFO [jenkins-hbase4:45091] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-18 07:15:31,150 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e58c166379c92472d3b2261a4ddc054f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,151 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664531150"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664531150"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664531150"}]},"ts":"1689664531150"} 2023-07-18 07:15:31,151 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b49f57bdede1bc5da2682c0eb9ee388b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,151 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664531151"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664531151"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664531151"}]},"ts":"1689664531151"} 2023-07-18 07:15:31,152 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure e58c166379c92472d3b2261a4ddc054f, server=jenkins-hbase4.apache.org,44321,1689664530255}] 2023-07-18 07:15:31,152 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure b49f57bdede1bc5da2682c0eb9ee388b, server=jenkins-hbase4.apache.org,44321,1689664530255}] 2023-07-18 07:15:31,307 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 
2023-07-18 07:15:31,307 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e58c166379c92472d3b2261a4ddc054f, NAME => 'hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:31,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,308 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,309 INFO [StoreOpener-e58c166379c92472d3b2261a4ddc054f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,311 DEBUG [StoreOpener-e58c166379c92472d3b2261a4ddc054f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/info 2023-07-18 07:15:31,311 DEBUG [StoreOpener-e58c166379c92472d3b2261a4ddc054f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/info 2023-07-18 07:15:31,311 INFO [StoreOpener-e58c166379c92472d3b2261a4ddc054f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e58c166379c92472d3b2261a4ddc054f columnFamilyName info 2023-07-18 07:15:31,312 INFO [StoreOpener-e58c166379c92472d3b2261a4ddc054f-1] regionserver.HStore(310): Store=e58c166379c92472d3b2261a4ddc054f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:31,312 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,313 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,315 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:31,317 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:31,317 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e58c166379c92472d3b2261a4ddc054f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10841199680, jitterRate=0.00966539978981018}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:31,318 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e58c166379c92472d3b2261a4ddc054f: 2023-07-18 07:15:31,318 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f., pid=8, masterSystemTime=1689664531303 2023-07-18 07:15:31,320 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:31,320 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:31,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 
2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b49f57bdede1bc5da2682c0eb9ee388b, NAME => 'hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:31,321 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e58c166379c92472d3b2261a4ddc054f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:31,321 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664531321"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664531321"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664531321"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664531321"}]},"ts":"1689664531321"} 2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. service=MultiRowMutationService 2023-07-18 07:15:31,321 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,321 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,323 INFO [StoreOpener-b49f57bdede1bc5da2682c0eb9ee388b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,324 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-18 07:15:31,324 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure e58c166379c92472d3b2261a4ddc054f, server=jenkins-hbase4.apache.org,44321,1689664530255 in 170 msec 2023-07-18 07:15:31,324 DEBUG [StoreOpener-b49f57bdede1bc5da2682c0eb9ee388b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/m 2023-07-18 07:15:31,324 DEBUG [StoreOpener-b49f57bdede1bc5da2682c0eb9ee388b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/m 2023-07-18 07:15:31,325 INFO [StoreOpener-b49f57bdede1bc5da2682c0eb9ee388b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b49f57bdede1bc5da2682c0eb9ee388b columnFamilyName m 2023-07-18 07:15:31,325 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 07:15:31,325 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e58c166379c92472d3b2261a4ddc054f, ASSIGN in 272 msec 2023-07-18 07:15:31,326 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:31,326 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531326"}]},"ts":"1689664531326"} 2023-07-18 07:15:31,327 INFO [StoreOpener-b49f57bdede1bc5da2682c0eb9ee388b-1] regionserver.HStore(310): Store=b49f57bdede1bc5da2682c0eb9ee388b/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:31,327 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 07:15:31,327 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,328 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,329 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:31,331 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:31,331 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure 
table=hbase:namespace in 315 msec 2023-07-18 07:15:31,333 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:31,333 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b49f57bdede1bc5da2682c0eb9ee388b; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@72238b90, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:31,334 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b49f57bdede1bc5da2682c0eb9ee388b: 2023-07-18 07:15:31,334 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b., pid=9, masterSystemTime=1689664531303 2023-07-18 07:15:31,335 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:31,335 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:31,336 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b49f57bdede1bc5da2682c0eb9ee388b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,336 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664531336"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664531336"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664531336"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664531336"}]},"ts":"1689664531336"} 2023-07-18 07:15:31,338 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-18 07:15:31,338 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure b49f57bdede1bc5da2682c0eb9ee388b, server=jenkins-hbase4.apache.org,44321,1689664530255 in 185 msec 2023-07-18 07:15:31,340 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-18 07:15:31,340 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b49f57bdede1bc5da2682c0eb9ee388b, ASSIGN in 194 msec 2023-07-18 07:15:31,340 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:31,340 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531340"}]},"ts":"1689664531340"} 2023-07-18 07:15:31,341 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 
07:15:31,347 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:31,349 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 242 msec 2023-07-18 07:15:31,410 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 07:15:31,410 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 07:15:31,414 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:31,414 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:31,416 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:31,417 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45091,1689664530065] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 07:15:31,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 07:15:31,418 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:31,418 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:31,422 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 07:15:31,429 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:31,431 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 8 msec 2023-07-18 07:15:31,433 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 07:15:31,440 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): 
master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:31,443 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-18 07:15:31,447 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 07:15:31,450 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 07:15:31,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.142sec 2023-07-18 07:15:31,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-18 07:15:31,450 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:31,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-18 07:15:31,451 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-18 07:15:31,453 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:31,454 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:31,454 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-18 07:15:31,455 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,456 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5 empty. 
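The master's create 'hbase:quota' request above dumps the table schema in shell notation ({NAME => 'q', VERSIONS => '1', BLOCKSIZE => '65536', ...}). hbase:quota is created automatically by MasterQuotaManager, as this log shows, so the sketch below is only a hedged mapping of that notation onto the HBase 2.x descriptor builders; the class name and the "quota_like_demo" table are invented.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class QuotaLikeSchema {
      // Two column families, 'q' and 'u', mirroring the shell-notation dump in the entry above.
      static TableDescriptor build() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("quota_like_demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q"))
                .setMaxVersions(1)           // VERSIONS => '1'
                .setInMemory(false)          // IN_MEMORY => 'false'
                .setBlockCacheEnabled(true)  // BLOCKCACHE => 'true'
                .setBlocksize(65536)         // BLOCKSIZE => '65536'
                .build())
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("u"))
                .setMaxVersions(1)
                .build())
            .build();
      }
    }

Options not spelled out here (BLOOMFILTER, TTL, COMPRESSION, and so on) keep their defaults, which is what the shell dump in the log is showing explicitly.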
2023-07-18 07:15:31,456 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,456 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-18 07:15:31,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-18 07:15:31,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-18 07:15:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:31,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 07:15:31,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 07:15:31,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45091,1689664530065-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 07:15:31,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45091,1689664530065-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-18 07:15:31,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 07:15:31,468 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:31,470 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => b3ffb5838e6e26bfc237f43e2ca098c5, NAME => 'hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp 2023-07-18 07:15:31,482 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,482 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing b3ffb5838e6e26bfc237f43e2ca098c5, disabling compactions & flushes 2023-07-18 07:15:31,482 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:31,482 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:31,482 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. after waiting 0 ms 2023-07-18 07:15:31,482 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:31,483 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:31,483 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for b3ffb5838e6e26bfc237f43e2ca098c5: 2023-07-18 07:15:31,486 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:31,487 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689664531486"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664531486"}]},"ts":"1689664531486"} 2023-07-18 07:15:31,488 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 07:15:31,489 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:31,489 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531489"}]},"ts":"1689664531489"} 2023-07-18 07:15:31,490 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-18 07:15:31,493 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:31,494 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:31,494 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:31,494 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:31,494 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:31,494 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b3ffb5838e6e26bfc237f43e2ca098c5, ASSIGN}] 2023-07-18 07:15:31,495 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=b3ffb5838e6e26bfc237f43e2ca098c5, ASSIGN 2023-07-18 07:15:31,496 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=b3ffb5838e6e26bfc237f43e2ca098c5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44321,1689664530255; forceNewPlan=false, retain=false 2023-07-18 07:15:31,503 DEBUG [Listener at localhost/44381] zookeeper.ReadOnlyZKClient(139): Connect 0x600fdee6 to 127.0.0.1:57544 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:31,508 DEBUG [Listener at localhost/44381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@157e8e2e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:31,509 DEBUG [hconnection-0x7541fe47-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:31,511 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:40368, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:31,512 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:31,512 INFO [Listener at localhost/44381] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:31,514 DEBUG [Listener at localhost/44381] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 07:15:31,516 INFO [RS-EventLoopGroup-8-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55670, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 07:15:31,520 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 07:15:31,520 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:31,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 07:15:31,521 DEBUG [Listener at localhost/44381] zookeeper.ReadOnlyZKClient(139): Connect 0x35a21aab to 127.0.0.1:57544 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:31,525 DEBUG [Listener at localhost/44381] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4f6b799e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:31,525 INFO [Listener at localhost/44381] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57544 2023-07-18 07:15:31,528 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:31,528 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10177492976000a connected 2023-07-18 07:15:31,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-18 07:15:31,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-18 07:15:31,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 07:15:31,543 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:31,545 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 12 msec 2023-07-18 07:15:31,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-18 07:15:31,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:31,646 INFO [jenkins-hbase4:45091] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 07:15:31,647 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b3ffb5838e6e26bfc237f43e2ca098c5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,647 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689664531647"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664531647"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664531647"}]},"ts":"1689664531647"} 2023-07-18 07:15:31,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-18 07:15:31,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure b3ffb5838e6e26bfc237f43e2ca098c5, server=jenkins-hbase4.apache.org,44321,1689664530255}] 2023-07-18 07:15:31,651 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:31,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-18 07:15:31,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:31,656 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:31,658 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:31,660 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:31,662 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:31,663 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d empty. 
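The requests logged above, creating namespace np1 with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2 and then creating np1:table1 with a single family fam1, correspond to ordinary Admin API calls. A hedged approximation follows (not the test's actual code; connection setup is assumed):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class Np1Setup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Namespace with the quota limits shown in the log: at most 5 regions and 2 tables.
          admin.createNamespace(NamespaceDescriptor.create("np1")
              .addConfiguration("hbase.namespace.quota.maxregions", "5")
              .addConfiguration("hbase.namespace.quota.maxtables", "2")
              .build());
          // Single-family table in that namespace, matching "create 'np1:table1', {NAME => 'fam1', ...}".
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build());
        }
      }
    }

Both calls block until the corresponding CreateNamespaceProcedure / CreateTableProcedure finishes on the master, which is the procedure activity traced in the surrounding entries.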
2023-07-18 07:15:31,663 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:31,663 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 07:15:31,692 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:31,694 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => c768ee8eea92a2691c885dc8acf3a11d, NAME => 'np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp 2023-07-18 07:15:31,706 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,706 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing c768ee8eea92a2691c885dc8acf3a11d, disabling compactions & flushes 2023-07-18 07:15:31,706 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:31,706 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:31,706 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. after waiting 0 ms 2023-07-18 07:15:31,706 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:31,706 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:31,706 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for c768ee8eea92a2691c885dc8acf3a11d: 2023-07-18 07:15:31,709 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:31,710 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664531710"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664531710"}]},"ts":"1689664531710"} 2023-07-18 07:15:31,711 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-18 07:15:31,712 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:31,712 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531712"}]},"ts":"1689664531712"} 2023-07-18 07:15:31,713 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-18 07:15:31,718 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:31,718 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:31,718 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:31,718 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:31,718 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:31,718 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, ASSIGN}] 2023-07-18 07:15:31,719 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, ASSIGN 2023-07-18 07:15:31,720 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44321,1689664530255; forceNewPlan=false, retain=false 2023-07-18 07:15:31,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:31,806 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 
2023-07-18 07:15:31,806 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b3ffb5838e6e26bfc237f43e2ca098c5, NAME => 'hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:31,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:31,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,807 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,808 INFO [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,810 DEBUG [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5/q 2023-07-18 07:15:31,810 DEBUG [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5/q 2023-07-18 07:15:31,810 INFO [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3ffb5838e6e26bfc237f43e2ca098c5 columnFamilyName q 2023-07-18 07:15:31,811 INFO [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] regionserver.HStore(310): Store=b3ffb5838e6e26bfc237f43e2ca098c5/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:31,811 INFO [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,813 DEBUG 
[StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5/u 2023-07-18 07:15:31,813 DEBUG [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5/u 2023-07-18 07:15:31,813 INFO [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b3ffb5838e6e26bfc237f43e2ca098c5 columnFamilyName u 2023-07-18 07:15:31,814 INFO [StoreOpener-b3ffb5838e6e26bfc237f43e2ca098c5-1] regionserver.HStore(310): Store=b3ffb5838e6e26bfc237f43e2ca098c5/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:31,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,815 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,824 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
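The FlushLargeStoresPolicy entry above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so the per-family flush lower bound falls back to memstore-flush-size divided by the number of families (64 MB, i.e. the flushSizeLowerBound=67108864 printed when the region opens just below). If a table wanted an explicit bound it could be set on its descriptor; the table, families, and 16 MB value in this hedged sketch are invented.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class FlushLowerBoundSketch {
      static TableDescriptor withExplicitLowerBound() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("flushy_demo"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("a"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("b"))
            // Explicit 16 MB per-family lower bound instead of the computed default,
            // using the property name taken from the log entry above.
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(16 * 1024 * 1024))
            .build();
      }
    }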
2023-07-18 07:15:31,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:31,828 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:31,829 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b3ffb5838e6e26bfc237f43e2ca098c5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10137271520, jitterRate=-0.05589301884174347}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-18 07:15:31,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b3ffb5838e6e26bfc237f43e2ca098c5: 2023-07-18 07:15:31,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5., pid=16, masterSystemTime=1689664531802 2023-07-18 07:15:31,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:31,831 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:31,832 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=b3ffb5838e6e26bfc237f43e2ca098c5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,832 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689664531832"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664531832"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664531832"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664531832"}]},"ts":"1689664531832"} 2023-07-18 07:15:31,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-18 07:15:31,835 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure b3ffb5838e6e26bfc237f43e2ca098c5, server=jenkins-hbase4.apache.org,44321,1689664530255 in 183 msec 2023-07-18 07:15:31,837 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 07:15:31,837 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=b3ffb5838e6e26bfc237f43e2ca098c5, ASSIGN in 341 msec 2023-07-18 07:15:31,837 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:31,837 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664531837"}]},"ts":"1689664531837"} 2023-07-18 07:15:31,838 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-18 07:15:31,841 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:31,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 391 msec 2023-07-18 07:15:31,870 INFO [jenkins-hbase4:45091] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 07:15:31,871 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c768ee8eea92a2691c885dc8acf3a11d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:31,872 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664531871"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664531871"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664531871"}]},"ts":"1689664531871"} 2023-07-18 07:15:31,874 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure c768ee8eea92a2691c885dc8acf3a11d, server=jenkins-hbase4.apache.org,44321,1689664530255}] 2023-07-18 07:15:31,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:32,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 
2023-07-18 07:15:32,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c768ee8eea92a2691c885dc8acf3a11d, NAME => 'np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:32,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:32,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,035 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,036 INFO [StoreOpener-c768ee8eea92a2691c885dc8acf3a11d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,038 DEBUG [StoreOpener-c768ee8eea92a2691c885dc8acf3a11d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/fam1 2023-07-18 07:15:32,038 DEBUG [StoreOpener-c768ee8eea92a2691c885dc8acf3a11d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/fam1 2023-07-18 07:15:32,039 INFO [StoreOpener-c768ee8eea92a2691c885dc8acf3a11d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c768ee8eea92a2691c885dc8acf3a11d columnFamilyName fam1 2023-07-18 07:15:32,039 INFO [StoreOpener-c768ee8eea92a2691c885dc8acf3a11d-1] regionserver.HStore(310): Store=c768ee8eea92a2691c885dc8acf3a11d/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:32,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:32,047 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c768ee8eea92a2691c885dc8acf3a11d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9458499040, jitterRate=-0.11910863220691681}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:32,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c768ee8eea92a2691c885dc8acf3a11d: 2023-07-18 07:15:32,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d., pid=18, masterSystemTime=1689664532026 2023-07-18 07:15:32,050 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:32,050 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:32,050 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c768ee8eea92a2691c885dc8acf3a11d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:32,050 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664532050"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664532050"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664532050"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664532050"}]},"ts":"1689664532050"} 2023-07-18 07:15:32,054 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 07:15:32,054 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure c768ee8eea92a2691c885dc8acf3a11d, server=jenkins-hbase4.apache.org,44321,1689664530255 in 178 msec 2023-07-18 07:15:32,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-18 07:15:32,055 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, ASSIGN in 336 msec 2023-07-18 07:15:32,056 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:32,056 DEBUG [PEWorker-2] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664532056"}]},"ts":"1689664532056"} 2023-07-18 07:15:32,062 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-18 07:15:32,065 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:32,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 420 msec 2023-07-18 07:15:32,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-18 07:15:32,257 INFO [Listener at localhost/44381] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-18 07:15:32,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:32,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-18 07:15:32,263 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:32,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-18 07:15:32,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 07:15:32,283 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-18 07:15:32,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 07:15:32,368 INFO [Listener at localhost/44381] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
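The rollback above shows the np1 namespace quota taking effect: np1 permits at most 5 regions, np1:table1 already occupies 1, and np1:table2 would add 5 more, so the master rolls back pid=19 with QuotaExceededException and the client sees the same message through HBaseAdmin$TableFuture. A hedged sketch of the failing call follows; the split keys are invented, and only the limits, names, and exception type come from the log.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class Np1QuotaRejection {
      static void tryCreateTable2(Admin admin) {
        // 4 split keys => 5 regions requested; np1 already holds 1 region (np1:table1),
        // and 1 + 5 exceeds hbase.namespace.quota.maxregions=5, matching the rollback above.
        byte[][] splitKeys = {
            Bytes.toBytes("b"), Bytes.toBytes("c"), Bytes.toBytes("d"), Bytes.toBytes("e")
        };
        try {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table2"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build(), splitKeys);
        } catch (IOException e) {
          // In the logged run the underlying cause is
          // org.apache.hadoop.hbase.quotas.QuotaExceededException.
          System.err.println("create rejected by namespace quota: " + e.getMessage());
        }
      }
    }

The subsequent entries, where np1:table1 is disabled, are the test cleaning up so the namespace can eventually be dropped.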
2023-07-18 07:15:32,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:32,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:32,371 INFO [Listener at localhost/44381] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-18 07:15:32,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-18 07:15:32,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-18 07:15:32,376 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664532376"}]},"ts":"1689664532376"} 2023-07-18 07:15:32,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 07:15:32,377 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-18 07:15:32,379 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-18 07:15:32,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, UNASSIGN}] 2023-07-18 07:15:32,380 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, UNASSIGN 2023-07-18 07:15:32,381 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=c768ee8eea92a2691c885dc8acf3a11d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:32,381 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664532381"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664532381"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664532381"}]},"ts":"1689664532381"} 2023-07-18 07:15:32,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure c768ee8eea92a2691c885dc8acf3a11d, server=jenkins-hbase4.apache.org,44321,1689664530255}] 2023-07-18 07:15:32,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 07:15:32,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c768ee8eea92a2691c885dc8acf3a11d, disabling compactions & flushes 2023-07-18 07:15:32,535 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:32,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:32,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. after waiting 0 ms 2023-07-18 07:15:32,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:32,539 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:32,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d. 2023-07-18 07:15:32,540 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c768ee8eea92a2691c885dc8acf3a11d: 2023-07-18 07:15:32,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,542 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=c768ee8eea92a2691c885dc8acf3a11d, regionState=CLOSED 2023-07-18 07:15:32,542 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664532541"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664532541"}]},"ts":"1689664532541"} 2023-07-18 07:15:32,595 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-18 07:15:32,595 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure c768ee8eea92a2691c885dc8acf3a11d, server=jenkins-hbase4.apache.org,44321,1689664530255 in 161 msec 2023-07-18 07:15:32,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-18 07:15:32,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=c768ee8eea92a2691c885dc8acf3a11d, UNASSIGN in 216 msec 2023-07-18 07:15:32,598 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664532597"}]},"ts":"1689664532597"} 2023-07-18 07:15:32,599 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-18 07:15:32,600 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-18 07:15:32,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 230 msec 2023-07-18 07:15:32,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 07:15:32,679 INFO [Listener at localhost/44381] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-18 07:15:32,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-18 07:15:32,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-18 07:15:32,682 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 07:15:32,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-18 07:15:32,683 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 07:15:32,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:32,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:32,686 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,688 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/fam1, FileablePath, hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/recovered.edits] 2023-07-18 07:15:32,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 07:15:32,695 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/recovered.edits/4.seqid to hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/archive/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d/recovered.edits/4.seqid 2023-07-18 07:15:32,696 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/.tmp/data/np1/table1/c768ee8eea92a2691c885dc8acf3a11d 2023-07-18 07:15:32,696 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-18 07:15:32,698 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 07:15:32,699 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-18 07:15:32,701 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-18 07:15:32,702 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 07:15:32,702 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-18 07:15:32,702 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664532702"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:32,704 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 07:15:32,704 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c768ee8eea92a2691c885dc8acf3a11d, NAME => 'np1:table1,,1689664531644.c768ee8eea92a2691c885dc8acf3a11d.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 07:15:32,704 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-18 07:15:32,704 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664532704"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:32,705 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-18 07:15:32,709 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-18 07:15:32,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 29 msec 2023-07-18 07:15:32,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-18 07:15:32,789 INFO [Listener at localhost/44381] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-18 07:15:32,794 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-18 07:15:32,801 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-18 07:15:32,803 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 07:15:32,806 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 07:15:32,808 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 07:15:32,809 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 07:15:32,809 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-18 07:15:32,809 DEBUG [Listener at 
localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:32,810 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 07:15:32,812 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-18 07:15:32,813 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 18 msec 2023-07-18 07:15:32,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45091] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-18 07:15:32,910 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 07:15:32,910 INFO [Listener at localhost/44381] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 07:15:32,910 DEBUG [Listener at localhost/44381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x600fdee6 to 127.0.0.1:57544 2023-07-18 07:15:32,910 DEBUG [Listener at localhost/44381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,910 DEBUG [Listener at localhost/44381] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 07:15:32,910 DEBUG [Listener at localhost/44381] util.JVMClusterUtil(257): Found active master hash=118763584, stopped=false 2023-07-18 07:15:32,911 DEBUG [Listener at localhost/44381] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 07:15:32,911 DEBUG [Listener at localhost/44381] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 07:15:32,911 DEBUG [Listener at localhost/44381] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-18 07:15:32,911 INFO [Listener at localhost/44381] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:32,912 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:32,912 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:32,912 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:32,912 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:32,912 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, 
quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:32,912 INFO [Listener at localhost/44381] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 07:15:32,913 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:32,913 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:32,913 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:32,913 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1064): Closing user regions 2023-07-18 07:15:32,913 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:32,913 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(3305): Received CLOSE for b49f57bdede1bc5da2682c0eb9ee388b 2023-07-18 07:15:32,915 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(3305): Received CLOSE for e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:32,915 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(3305): Received CLOSE for b3ffb5838e6e26bfc237f43e2ca098c5 2023-07-18 07:15:32,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b49f57bdede1bc5da2682c0eb9ee388b, disabling compactions & flushes 2023-07-18 07:15:32,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:32,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:32,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. after waiting 0 ms 2023-07-18 07:15:32,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 
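For reference, the cleanup that produced pids 20–24 a few entries back (disable np1:table1, delete it, then drop the namespace) corresponds to a short Admin sequence on the client side. A hedged sketch, assuming an already-open Admin handle; the helper class and method name are invented for illustration:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

/** Hypothetical helper mirroring the disable/delete/deleteNamespace sequence logged above. */
public final class Np1Cleanup {
  static void dropTableAndNamespace(Admin admin) throws IOException {
    TableName t1 = TableName.valueOf("np1:table1");
    if (admin.tableExists(t1)) {
      if (admin.isTableEnabled(t1)) {
        admin.disableTable(t1); // DisableTableProcedure (pid=20) unassigns the region via pids 21/22
      }
      admin.deleteTable(t1);    // DeleteTableProcedure (pid=23) archives the HDFS layout and cleans hbase:meta
    }
    admin.deleteNamespace("np1"); // DeleteNamespaceProcedure (pid=24) removes the ZK node and namespace dirs
  }
}
```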
2023-07-18 07:15:32,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b49f57bdede1bc5da2682c0eb9ee388b 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-18 07:15:32,916 DEBUG [Listener at localhost/44381] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x12a0e162 to 127.0.0.1:57544 2023-07-18 07:15:32,916 DEBUG [Listener at localhost/44381] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,916 INFO [Listener at localhost/44381] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37561,1689664530143' ***** 2023-07-18 07:15:32,917 INFO [Listener at localhost/44381] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:32,917 INFO [Listener at localhost/44381] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37347,1689664530206' ***** 2023-07-18 07:15:32,917 INFO [Listener at localhost/44381] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:32,917 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:32,917 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:32,917 INFO [Listener at localhost/44381] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44321,1689664530255' ***** 2023-07-18 07:15:32,918 INFO [Listener at localhost/44381] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:32,919 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:32,927 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:32,927 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:32,928 INFO [RS:1;jenkins-hbase4:37347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4d24713f{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:32,928 INFO [RS:0;jenkins-hbase4:37561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@fc97682{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:32,930 INFO [RS:1;jenkins-hbase4:37347] server.AbstractConnector(383): Stopped ServerConnector@7f57ce5d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:32,930 INFO [RS:0;jenkins-hbase4:37561] server.AbstractConnector(383): Stopped ServerConnector@39054181{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:32,930 INFO [RS:2;jenkins-hbase4:44321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3f001a9a{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:32,930 INFO [RS:0;jenkins-hbase4:37561] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:32,930 INFO [RS:1;jenkins-hbase4:37347] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:32,931 INFO [RS:0;jenkins-hbase4:37561] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@df9e121{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:32,931 INFO [RS:2;jenkins-hbase4:44321] server.AbstractConnector(383): Stopped ServerConnector@4c21c98c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:32,933 INFO [RS:1;jenkins-hbase4:37347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@20973997{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:32,933 INFO [RS:2;jenkins-hbase4:44321] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:32,933 INFO [RS:0;jenkins-hbase4:37561] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@13121dc7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:32,933 INFO [RS:2;jenkins-hbase4:44321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a3eb869{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:32,934 INFO [RS:2;jenkins-hbase4:44321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3bece4cf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:32,933 INFO [RS:1;jenkins-hbase4:37347] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@490fef5a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:32,934 INFO [RS:2;jenkins-hbase4:44321] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:32,934 INFO [RS:2;jenkins-hbase4:44321] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:32,934 INFO [RS:2;jenkins-hbase4:44321] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(3307): Received CLOSE for the region: e58c166379c92472d3b2261a4ddc054f, which we are already trying to CLOSE, but not completed yet 2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(3307): Received CLOSE for the region: b3ffb5838e6e26bfc237f43e2ca098c5, which we are already trying to CLOSE, but not completed yet 2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:32,935 DEBUG [RS:2;jenkins-hbase4:44321] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2a416279 to 127.0.0.1:57544 2023-07-18 07:15:32,935 DEBUG [RS:2;jenkins-hbase4:44321] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:32,935 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 07:15:32,937 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-18 07:15:32,937 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1478): Online Regions={b49f57bdede1bc5da2682c0eb9ee388b=hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b., 1588230740=hbase:meta,,1.1588230740, e58c166379c92472d3b2261a4ddc054f=hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f., b3ffb5838e6e26bfc237f43e2ca098c5=hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5.} 2023-07-18 07:15:32,938 INFO [RS:1;jenkins-hbase4:37347] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:32,939 DEBUG [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1504): Waiting on 1588230740, b3ffb5838e6e26bfc237f43e2ca098c5, b49f57bdede1bc5da2682c0eb9ee388b, e58c166379c92472d3b2261a4ddc054f 2023-07-18 07:15:32,939 INFO [RS:1;jenkins-hbase4:37347] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:32,939 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:32,939 INFO [RS:0;jenkins-hbase4:37561] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:32,939 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:32,940 INFO [RS:0;jenkins-hbase4:37561] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:32,940 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:32,940 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:32,939 INFO [RS:1;jenkins-hbase4:37347] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:32,940 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:32,940 INFO [RS:0;jenkins-hbase4:37561] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:32,941 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:32,941 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:32,941 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:32,941 DEBUG [RS:1;jenkins-hbase4:37347] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1cc7aaed to 127.0.0.1:57544 2023-07-18 07:15:32,941 DEBUG [RS:1;jenkins-hbase4:37347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,941 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37347,1689664530206; all regions closed. 
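The "Online Regions={...}" entry shows RS:2, the meta-carrying server, still holding four regions when shutdown begins. The same assignment can be inspected from a client, since Admin exposes a per-server region list; a small sketch, assuming an open Admin handle, with the ServerName string copied from the log:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;

/** Hypothetical helper: list the regions a given server is hosting. */
public final class OnlineRegionsSketch {
  static void printRegions(Admin admin) throws IOException {
    // Server name format is host,port,startcode, exactly as it appears in the log.
    ServerName rs2 = ServerName.valueOf("jenkins-hbase4.apache.org,44321,1689664530255");
    for (RegionInfo region : admin.getRegions(rs2)) {
      System.out.println(region.getRegionNameAsString()); // e.g. hbase:meta,,1.1588230740
    }
  }
}
```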
2023-07-18 07:15:32,941 DEBUG [RS:1;jenkins-hbase4:37347] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 07:15:32,941 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:32,942 DEBUG [RS:0;jenkins-hbase4:37561] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5dbe7c66 to 127.0.0.1:57544 2023-07-18 07:15:32,942 DEBUG [RS:0;jenkins-hbase4:37561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,942 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37561,1689664530143; all regions closed. 2023-07-18 07:15:32,942 DEBUG [RS:0;jenkins-hbase4:37561] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 07:15:32,941 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-18 07:15:32,969 DEBUG [RS:1;jenkins-hbase4:37347] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs 2023-07-18 07:15:32,969 INFO [RS:1;jenkins-hbase4:37347] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37347%2C1689664530206:(num 1689664530760) 2023-07-18 07:15:32,969 DEBUG [RS:1;jenkins-hbase4:37347] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,969 INFO [RS:1;jenkins-hbase4:37347] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:32,969 INFO [RS:1;jenkins-hbase4:37347] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:32,969 DEBUG [RS:0;jenkins-hbase4:37561] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs 2023-07-18 07:15:32,969 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:32,969 INFO [RS:1;jenkins-hbase4:37347] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:32,969 INFO [RS:0;jenkins-hbase4:37561] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37561%2C1689664530143:(num 1689664530767) 2023-07-18 07:15:32,970 INFO [RS:1;jenkins-hbase4:37347] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:32,970 INFO [RS:1;jenkins-hbase4:37347] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:32,970 DEBUG [RS:0;jenkins-hbase4:37561] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:32,970 INFO [RS:0;jenkins-hbase4:37561] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:32,971 INFO [RS:1;jenkins-hbase4:37347] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37347 2023-07-18 07:15:32,971 INFO [RS:0;jenkins-hbase4:37561] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:32,971 INFO [RS:0;jenkins-hbase4:37561] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-18 07:15:32,974 INFO [RS:0;jenkins-hbase4:37561] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:32,974 INFO [RS:0;jenkins-hbase4:37561] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:32,973 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:32,974 INFO [RS:0;jenkins-hbase4:37561] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37561 2023-07-18 07:15:32,977 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:32,977 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:32,977 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:32,977 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:32,977 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37347,1689664530206 2023-07-18 07:15:32,978 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:32,979 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/.tmp/m/3eaea1f329b24f48aaf5bb08242ccdcf 2023-07-18 07:15:32,977 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:32,979 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:32,979 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:32,979 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): 
regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37561,1689664530143 2023-07-18 07:15:32,980 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37561,1689664530143] 2023-07-18 07:15:32,980 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37561,1689664530143; numProcessing=1 2023-07-18 07:15:32,990 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/.tmp/m/3eaea1f329b24f48aaf5bb08242ccdcf as hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/m/3eaea1f329b24f48aaf5bb08242ccdcf 2023-07-18 07:15:32,991 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/.tmp/info/88055b1b259a4e41b639390a04abd13e 2023-07-18 07:15:32,997 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/m/3eaea1f329b24f48aaf5bb08242ccdcf, entries=1, sequenceid=7, filesize=4.9 K 2023-07-18 07:15:32,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for b49f57bdede1bc5da2682c0eb9ee388b in 83ms, sequenceid=7, compaction requested=false 2023-07-18 07:15:32,999 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 88055b1b259a4e41b639390a04abd13e 2023-07-18 07:15:32,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-18 07:15:33,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/rsgroup/b49f57bdede1bc5da2682c0eb9ee388b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-18 07:15:33,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:33,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 2023-07-18 07:15:33,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b49f57bdede1bc5da2682c0eb9ee388b: 2023-07-18 07:15:33,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689664531105.b49f57bdede1bc5da2682c0eb9ee388b. 
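The flush just logged for hbase:rsgroup is the normal close path: the memstore is written to a temporary HFile under .tmp, committed into the store, and only then is the region closed and a new recovered.edits seqid written. The same flush can also be requested explicitly; a minimal sketch, assuming an open Admin handle:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

/** Hypothetical snippet: request the same memstore-to-HFile flush the close path performs above. */
public final class FlushExample {
  static void flushRsGroupTable(Admin admin) throws IOException {
    // Writes the memstore to a .tmp HFile, then commits it into the store
    // (mirrors the "Flushed ... / Committing ... / Added ..." sequence in the log).
    admin.flush(TableName.valueOf("hbase:rsgroup"));
  }
}
```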
2023-07-18 07:15:33,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e58c166379c92472d3b2261a4ddc054f, disabling compactions & flushes 2023-07-18 07:15:33,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:33,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:33,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. after waiting 0 ms 2023-07-18 07:15:33,009 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:33,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e58c166379c92472d3b2261a4ddc054f 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-18 07:15:33,013 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:33,017 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:33,022 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/.tmp/rep_barrier/8ed1b80a88ae4021949ead65c7e6d9b2 2023-07-18 07:15:33,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/.tmp/info/1e020bf9588d42aab9cd8d964f6bbf34 2023-07-18 07:15:33,028 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8ed1b80a88ae4021949ead65c7e6d9b2 2023-07-18 07:15:33,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1e020bf9588d42aab9cd8d964f6bbf34 2023-07-18 07:15:33,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/.tmp/info/1e020bf9588d42aab9cd8d964f6bbf34 as hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/info/1e020bf9588d42aab9cd8d964f6bbf34 2023-07-18 07:15:33,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1e020bf9588d42aab9cd8d964f6bbf34 2023-07-18 07:15:33,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/info/1e020bf9588d42aab9cd8d964f6bbf34, entries=3, sequenceid=8, filesize=5.0 K 2023-07-18 07:15:33,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for e58c166379c92472d3b2261a4ddc054f in 37ms, sequenceid=8, compaction requested=false 2023-07-18 07:15:33,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-18 07:15:33,053 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/.tmp/table/9591e0b748bb417e9fdc3e2bcb8cbc33 2023-07-18 07:15:33,060 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9591e0b748bb417e9fdc3e2bcb8cbc33 2023-07-18 07:15:33,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/namespace/e58c166379c92472d3b2261a4ddc054f/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-18 07:15:33,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:33,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e58c166379c92472d3b2261a4ddc054f: 2023-07-18 07:15:33,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689664531015.e58c166379c92472d3b2261a4ddc054f. 2023-07-18 07:15:33,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b3ffb5838e6e26bfc237f43e2ca098c5, disabling compactions & flushes 2023-07-18 07:15:33,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:33,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:33,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. after waiting 0 ms 2023-07-18 07:15:33,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 
2023-07-18 07:15:33,062 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/.tmp/info/88055b1b259a4e41b639390a04abd13e as hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/info/88055b1b259a4e41b639390a04abd13e 2023-07-18 07:15:33,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/quota/b3ffb5838e6e26bfc237f43e2ca098c5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:33,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:33,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b3ffb5838e6e26bfc237f43e2ca098c5: 2023-07-18 07:15:33,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689664531450.b3ffb5838e6e26bfc237f43e2ca098c5. 2023-07-18 07:15:33,070 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 88055b1b259a4e41b639390a04abd13e 2023-07-18 07:15:33,070 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/info/88055b1b259a4e41b639390a04abd13e, entries=32, sequenceid=31, filesize=8.5 K 2023-07-18 07:15:33,071 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/.tmp/rep_barrier/8ed1b80a88ae4021949ead65c7e6d9b2 as hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/rep_barrier/8ed1b80a88ae4021949ead65c7e6d9b2 2023-07-18 07:15:33,076 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8ed1b80a88ae4021949ead65c7e6d9b2 2023-07-18 07:15:33,076 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/rep_barrier/8ed1b80a88ae4021949ead65c7e6d9b2, entries=1, sequenceid=31, filesize=4.9 K 2023-07-18 07:15:33,077 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/.tmp/table/9591e0b748bb417e9fdc3e2bcb8cbc33 as hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/table/9591e0b748bb417e9fdc3e2bcb8cbc33 2023-07-18 07:15:33,080 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,080 INFO [RS:1;jenkins-hbase4:37347] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37347,1689664530206; zookeeper 
connection closed. 2023-07-18 07:15:33,080 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37347-0x101774929760002, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,082 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@11892dbf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@11892dbf 2023-07-18 07:15:33,082 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37561,1689664530143 already deleted, retry=false 2023-07-18 07:15:33,082 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37561,1689664530143 expired; onlineServers=2 2023-07-18 07:15:33,083 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37347,1689664530206] 2023-07-18 07:15:33,083 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37347,1689664530206; numProcessing=2 2023-07-18 07:15:33,083 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9591e0b748bb417e9fdc3e2bcb8cbc33 2023-07-18 07:15:33,083 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/table/9591e0b748bb417e9fdc3e2bcb8cbc33, entries=8, sequenceid=31, filesize=5.2 K 2023-07-18 07:15:33,084 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 143ms, sequenceid=31, compaction requested=false 2023-07-18 07:15:33,084 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-18 07:15:33,084 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37347,1689664530206 already deleted, retry=false 2023-07-18 07:15:33,084 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37347,1689664530206 expired; onlineServers=1 2023-07-18 07:15:33,098 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-18 07:15:33,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:33,099 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:33,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:33,099 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:33,112 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase 
Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,112 INFO [RS:0;jenkins-hbase4:37561] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37561,1689664530143; zookeeper connection closed. 2023-07-18 07:15:33,112 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:37561-0x101774929760001, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,113 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@815f612] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@815f612 2023-07-18 07:15:33,139 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44321,1689664530255; all regions closed. 2023-07-18 07:15:33,140 DEBUG [RS:2;jenkins-hbase4:44321] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-18 07:15:33,146 DEBUG [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs 2023-07-18 07:15:33,146 INFO [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44321%2C1689664530255.meta:.meta(num 1689664530961) 2023-07-18 07:15:33,152 DEBUG [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/oldWALs 2023-07-18 07:15:33,152 INFO [RS:2;jenkins-hbase4:44321] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44321%2C1689664530255:(num 1689664530767) 2023-07-18 07:15:33,152 DEBUG [RS:2;jenkins-hbase4:44321] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:33,152 INFO [RS:2;jenkins-hbase4:44321] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:33,152 INFO [RS:2;jenkins-hbase4:44321] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:33,152 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
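The NodeDeleted events on /hbase/rs/... above are the master's RegionServerTracker noticing that each stopping region server's ephemeral znode vanished along with its ZooKeeper session. The same children can be observed with a plain ZooKeeper client; this is an external illustration rather than the HBase-internal ZKWatcher API, the quorum string is taken from the log, and the timeout is an arbitrary choice:

```java
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

/** Hypothetical observer of the ephemeral region-server znodes the master tracks above. */
public class RsZNodeWatcher {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent e) ->
        System.out.println("ZK event: " + e.getType() + " on " + e.getPath());
    // Quorum string taken from the log; 30s session timeout is arbitrary.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:57544", 30000, watcher);
    // Watch the children of /hbase/rs: a NodeChildrenChanged event fires when a
    // region server's ephemeral node (e.g. jenkins-hbase4.apache.org,44321,...) disappears.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    System.out.println("Live region servers: " + servers);
    Thread.sleep(60_000); // keep the session open long enough to observe events
    zk.close();
  }
}
```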
2023-07-18 07:15:33,153 INFO [RS:2;jenkins-hbase4:44321] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44321 2023-07-18 07:15:33,156 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44321,1689664530255 2023-07-18 07:15:33,156 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:33,157 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44321,1689664530255] 2023-07-18 07:15:33,157 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44321,1689664530255; numProcessing=3 2023-07-18 07:15:33,158 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44321,1689664530255 already deleted, retry=false 2023-07-18 07:15:33,158 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44321,1689664530255 expired; onlineServers=0 2023-07-18 07:15:33,158 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45091,1689664530065' ***** 2023-07-18 07:15:33,158 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 07:15:33,159 DEBUG [M:0;jenkins-hbase4:45091] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@32ba47a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:33,159 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:33,161 INFO [M:0;jenkins-hbase4:45091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@39b24fff{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 07:15:33,161 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:33,161 INFO [M:0;jenkins-hbase4:45091] server.AbstractConnector(383): Stopped ServerConnector@6a76b1f2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:33,161 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:33,161 INFO [M:0;jenkins-hbase4:45091] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:33,161 INFO [M:0;jenkins-hbase4:45091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1839aa60{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:33,161 INFO 
[M:0;jenkins-hbase4:45091] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@71745bed{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:33,161 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:33,162 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45091,1689664530065 2023-07-18 07:15:33,162 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45091,1689664530065; all regions closed. 2023-07-18 07:15:33,162 DEBUG [M:0;jenkins-hbase4:45091] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:33,162 INFO [M:0;jenkins-hbase4:45091] master.HMaster(1491): Stopping master jetty server 2023-07-18 07:15:33,162 INFO [M:0;jenkins-hbase4:45091] server.AbstractConnector(383): Stopped ServerConnector@50abe707{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:33,163 DEBUG [M:0;jenkins-hbase4:45091] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 07:15:33,163 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 07:15:33,163 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664530548] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664530548,5,FailOnTimeoutGroup] 2023-07-18 07:15:33,163 DEBUG [M:0;jenkins-hbase4:45091] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 07:15:33,163 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664530547] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664530547,5,FailOnTimeoutGroup] 2023-07-18 07:15:33,164 INFO [M:0;jenkins-hbase4:45091] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-18 07:15:33,164 INFO [M:0;jenkins-hbase4:45091] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 07:15:33,164 INFO [M:0;jenkins-hbase4:45091] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:33,165 DEBUG [M:0;jenkins-hbase4:45091] master.HMaster(1512): Stopping service threads 2023-07-18 07:15:33,165 INFO [M:0;jenkins-hbase4:45091] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 07:15:33,165 ERROR [M:0;jenkins-hbase4:45091] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 07:15:33,165 INFO [M:0;jenkins-hbase4:45091] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 07:15:33,165 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-18 07:15:33,166 DEBUG [M:0;jenkins-hbase4:45091] zookeeper.ZKUtil(398): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 07:15:33,166 WARN [M:0;jenkins-hbase4:45091] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 07:15:33,166 INFO [M:0;jenkins-hbase4:45091] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 07:15:33,166 INFO [M:0;jenkins-hbase4:45091] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 07:15:33,166 DEBUG [M:0;jenkins-hbase4:45091] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 07:15:33,166 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:33,166 DEBUG [M:0;jenkins-hbase4:45091] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:33,166 DEBUG [M:0;jenkins-hbase4:45091] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 07:15:33,166 DEBUG [M:0;jenkins-hbase4:45091] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:33,166 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.96 KB heapSize=109.12 KB 2023-07-18 07:15:33,181 INFO [M:0;jenkins-hbase4:45091] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.96 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5808a2c3f1fc4be3a3ab2f0f63a76bb0 2023-07-18 07:15:33,186 DEBUG [M:0;jenkins-hbase4:45091] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5808a2c3f1fc4be3a3ab2f0f63a76bb0 as hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5808a2c3f1fc4be3a3ab2f0f63a76bb0 2023-07-18 07:15:33,191 INFO [M:0;jenkins-hbase4:45091] regionserver.HStore(1080): Added hdfs://localhost:39713/user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5808a2c3f1fc4be3a3ab2f0f63a76bb0, entries=24, sequenceid=194, filesize=12.4 K 2023-07-18 07:15:33,192 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegion(2948): Finished flush of dataSize ~92.96 KB/95191, heapSize ~109.10 KB/111720, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=194, compaction requested=false 2023-07-18 07:15:33,194 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 07:15:33,194 DEBUG [M:0;jenkins-hbase4:45091] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:33,197 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/120efffd-6d0a-01b5-e937-4e7812e85250/MasterData/WALs/jenkins-hbase4.apache.org,45091,1689664530065/jenkins-hbase4.apache.org%2C45091%2C1689664530065.1689664530373 not finished, retry = 0 2023-07-18 07:15:33,298 INFO [M:0;jenkins-hbase4:45091] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 07:15:33,298 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:33,299 INFO [M:0;jenkins-hbase4:45091] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45091 2023-07-18 07:15:33,301 DEBUG [M:0;jenkins-hbase4:45091] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45091,1689664530065 already deleted, retry=false 2023-07-18 07:15:33,613 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,613 INFO [M:0;jenkins-hbase4:45091] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45091,1689664530065; zookeeper connection closed. 2023-07-18 07:15:33,613 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): master:45091-0x101774929760000, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,713 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,713 INFO [RS:2;jenkins-hbase4:44321] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44321,1689664530255; zookeeper connection closed. 
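The shutdown sequence logged above and below (region servers exiting, the master flushing and closing its local master:store region, ZooKeeper sessions closing, then the DataNodes and MiniZK going down) is the test utility tearing the mini cluster down before restarting it further below. The following is a minimal sketch of how such a teardown is typically triggered from a JUnit 4 test using the standard HBaseTestingUtility API; it is an assumption for illustration, not the actual code of TestRSGroupsAdmin1.

// Hypothetical sketch of the teardown step that produces a shutdown sequence like the one logged here.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;

public class MiniClusterTeardownSketch {
  // Shared across the test class; in the real test this would be created in a @BeforeClass method.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @AfterClass
  public static void tearDown() throws Exception {
    // Stops the HMaster, the region servers, the DataNodes and the MiniZooKeeperCluster,
    // leading to messages like "Shutdown of 1 master(s) and 3 regionserver(s) complete"
    // and "Minicluster is down".
    TEST_UTIL.shutdownMiniCluster();
  }
}

In this run the same utility immediately brings up a fresh mini cluster afterwards, which is what the second "Starting up minicluster with option: StartMiniClusterOption{...}" line below records.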
2023-07-18 07:15:33,713 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): regionserver:44321-0x101774929760003, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:33,714 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1bdb4756] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1bdb4756 2023-07-18 07:15:33,714 INFO [Listener at localhost/44381] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-18 07:15:33,714 WARN [Listener at localhost/44381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:33,718 INFO [Listener at localhost/44381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:33,825 WARN [BP-1080368630-172.31.14.131-1689664529157 heartbeating to localhost/127.0.0.1:39713] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:33,825 WARN [BP-1080368630-172.31.14.131-1689664529157 heartbeating to localhost/127.0.0.1:39713] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1080368630-172.31.14.131-1689664529157 (Datanode Uuid 466feb48-3a2c-4ded-9833-34d2037c2ee0) service to localhost/127.0.0.1:39713 2023-07-18 07:15:33,825 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/dfs/data/data5/current/BP-1080368630-172.31.14.131-1689664529157] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:33,826 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/dfs/data/data6/current/BP-1080368630-172.31.14.131-1689664529157] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:33,829 WARN [Listener at localhost/44381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:33,832 INFO [Listener at localhost/44381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:33,936 WARN [BP-1080368630-172.31.14.131-1689664529157 heartbeating to localhost/127.0.0.1:39713] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:33,936 WARN [BP-1080368630-172.31.14.131-1689664529157 heartbeating to localhost/127.0.0.1:39713] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1080368630-172.31.14.131-1689664529157 (Datanode Uuid 57602f4b-0757-44b1-bc18-17a87fd5a918) service to localhost/127.0.0.1:39713 2023-07-18 07:15:33,937 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/dfs/data/data3/current/BP-1080368630-172.31.14.131-1689664529157] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:33,937 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/dfs/data/data4/current/BP-1080368630-172.31.14.131-1689664529157] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:33,938 WARN [Listener at localhost/44381] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:33,941 INFO [Listener at localhost/44381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:34,045 WARN [BP-1080368630-172.31.14.131-1689664529157 heartbeating to localhost/127.0.0.1:39713] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:34,045 WARN [BP-1080368630-172.31.14.131-1689664529157 heartbeating to localhost/127.0.0.1:39713] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1080368630-172.31.14.131-1689664529157 (Datanode Uuid e33ad5f5-f0af-40f6-ad47-609c0b54b740) service to localhost/127.0.0.1:39713 2023-07-18 07:15:34,046 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/dfs/data/data1/current/BP-1080368630-172.31.14.131-1689664529157] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:34,046 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/cluster_67a26294-75e5-871c-79cb-8d23fdc88534/dfs/data/data2/current/BP-1080368630-172.31.14.131-1689664529157] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:34,059 INFO [Listener at localhost/44381] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:34,181 INFO [Listener at localhost/44381] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 07:15:34,210 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-18 07:15:34,210 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-18 07:15:34,210 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.log.dir so I do NOT create it in target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5 2023-07-18 07:15:34,210 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c2cba87c-9cd2-05e7-cde7-31589a9f5cfe/hadoop.tmp.dir so I do NOT create it in target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5 2023-07-18 07:15:34,210 INFO [Listener at localhost/44381] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3, deleteOnExit=true 2023-07-18 07:15:34,210 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-18 07:15:34,211 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/test.cache.data in system properties and HBase conf 2023-07-18 07:15:34,211 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.tmp.dir in system properties and HBase conf 2023-07-18 07:15:34,211 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir in system properties and HBase conf 2023-07-18 07:15:34,211 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-18 07:15:34,211 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-18 07:15:34,211 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-18 07:15:34,211 DEBUG [Listener at localhost/44381] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-18 07:15:34,212 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/nfs.dump.dir in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/java.io.tmpdir in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-18 07:15:34,213 INFO [Listener at localhost/44381] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-18 07:15:34,217 WARN [Listener at localhost/44381] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 07:15:34,217 WARN [Listener at localhost/44381] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 07:15:34,259 WARN [Listener at localhost/44381] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:34,262 INFO [Listener at localhost/44381] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:34,270 INFO [Listener at localhost/44381] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/java.io.tmpdir/Jetty_localhost_40561_hdfs____.bd7sf6/webapp 2023-07-18 07:15:34,279 DEBUG [Listener at localhost/44381-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10177492976000a, quorum=127.0.0.1:57544, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-18 07:15:34,279 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10177492976000a, quorum=127.0.0.1:57544, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-18 07:15:34,367 INFO [Listener at localhost/44381] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40561 2023-07-18 07:15:34,371 WARN [Listener at localhost/44381] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-18 07:15:34,371 WARN [Listener at localhost/44381] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-18 07:15:34,430 WARN [Listener at localhost/43393] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:34,445 WARN [Listener at localhost/43393] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:15:34,448 WARN [Listener 
at localhost/43393] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:34,449 INFO [Listener at localhost/43393] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:34,454 INFO [Listener at localhost/43393] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/java.io.tmpdir/Jetty_localhost_36477_datanode____id4e9k/webapp 2023-07-18 07:15:34,546 INFO [Listener at localhost/43393] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36477 2023-07-18 07:15:34,554 WARN [Listener at localhost/44453] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:34,569 WARN [Listener at localhost/44453] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:15:34,571 WARN [Listener at localhost/44453] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:34,572 INFO [Listener at localhost/44453] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:34,575 INFO [Listener at localhost/44453] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/java.io.tmpdir/Jetty_localhost_43095_datanode____uczefo/webapp 2023-07-18 07:15:34,668 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe3dd0e1b171856c8: Processing first storage report for DS-1be37284-7d24-4a11-875d-bd50b01a56c1 from datanode c14261ca-5184-4d4b-b2c9-7863ce20aa30 2023-07-18 07:15:34,668 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe3dd0e1b171856c8: from storage DS-1be37284-7d24-4a11-875d-bd50b01a56c1 node DatanodeRegistration(127.0.0.1:44193, datanodeUuid=c14261ca-5184-4d4b-b2c9-7863ce20aa30, infoPort=37851, infoSecurePort=0, ipcPort=44453, storageInfo=lv=-57;cid=testClusterID;nsid=1515925254;c=1689664534220), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:34,668 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe3dd0e1b171856c8: Processing first storage report for DS-83331746-c386-4cd2-aca1-4d0feb47e4be from datanode c14261ca-5184-4d4b-b2c9-7863ce20aa30 2023-07-18 07:15:34,668 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe3dd0e1b171856c8: from storage DS-83331746-c386-4cd2-aca1-4d0feb47e4be node DatanodeRegistration(127.0.0.1:44193, datanodeUuid=c14261ca-5184-4d4b-b2c9-7863ce20aa30, infoPort=37851, infoSecurePort=0, ipcPort=44453, storageInfo=lv=-57;cid=testClusterID;nsid=1515925254;c=1689664534220), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:34,681 INFO [Listener at localhost/44453] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43095 2023-07-18 07:15:34,688 WARN [Listener at localhost/40827] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-18 07:15:34,705 WARN [Listener at localhost/40827] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-18 07:15:34,709 WARN [Listener at localhost/40827] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-18 07:15:34,710 INFO [Listener at localhost/40827] log.Slf4jLog(67): jetty-6.1.26 2023-07-18 07:15:34,718 INFO [Listener at localhost/40827] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/java.io.tmpdir/Jetty_localhost_44213_datanode____6b97az/webapp 2023-07-18 07:15:34,780 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc74eb14ddfe40b9f: Processing first storage report for DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47 from datanode bacb708d-4601-42fb-b23d-abc95459762c 2023-07-18 07:15:34,780 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc74eb14ddfe40b9f: from storage DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47 node DatanodeRegistration(127.0.0.1:45913, datanodeUuid=bacb708d-4601-42fb-b23d-abc95459762c, infoPort=37909, infoSecurePort=0, ipcPort=40827, storageInfo=lv=-57;cid=testClusterID;nsid=1515925254;c=1689664534220), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:34,780 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc74eb14ddfe40b9f: Processing first storage report for DS-e2ff3b8d-0fd8-40b0-8199-48b7ba929228 from datanode bacb708d-4601-42fb-b23d-abc95459762c 2023-07-18 07:15:34,780 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc74eb14ddfe40b9f: from storage DS-e2ff3b8d-0fd8-40b0-8199-48b7ba929228 node DatanodeRegistration(127.0.0.1:45913, datanodeUuid=bacb708d-4601-42fb-b23d-abc95459762c, infoPort=37909, infoSecurePort=0, ipcPort=40827, storageInfo=lv=-57;cid=testClusterID;nsid=1515925254;c=1689664534220), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 07:15:34,819 INFO [Listener at localhost/40827] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44213 2023-07-18 07:15:34,826 WARN [Listener at localhost/36955] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-18 07:15:34,916 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9c118d6e2413c9ba: Processing first storage report for DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5 from datanode 680555c7-f1d3-4e78-a16f-265f6a6c804b 2023-07-18 07:15:34,916 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9c118d6e2413c9ba: from storage DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5 node DatanodeRegistration(127.0.0.1:38199, datanodeUuid=680555c7-f1d3-4e78-a16f-265f6a6c804b, infoPort=45745, infoSecurePort=0, ipcPort=36955, storageInfo=lv=-57;cid=testClusterID;nsid=1515925254;c=1689664534220), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-18 07:15:34,917 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9c118d6e2413c9ba: Processing first storage 
report for DS-044447a2-1629-453d-a5c5-2dfedad29c11 from datanode 680555c7-f1d3-4e78-a16f-265f6a6c804b 2023-07-18 07:15:34,917 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9c118d6e2413c9ba: from storage DS-044447a2-1629-453d-a5c5-2dfedad29c11 node DatanodeRegistration(127.0.0.1:38199, datanodeUuid=680555c7-f1d3-4e78-a16f-265f6a6c804b, infoPort=45745, infoSecurePort=0, ipcPort=36955, storageInfo=lv=-57;cid=testClusterID;nsid=1515925254;c=1689664534220), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-18 07:15:34,937 DEBUG [Listener at localhost/36955] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5 2023-07-18 07:15:34,939 INFO [Listener at localhost/36955] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/zookeeper_0, clientPort=63390, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-18 07:15:34,940 INFO [Listener at localhost/36955] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63390 2023-07-18 07:15:34,940 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:34,941 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:34,955 INFO [Listener at localhost/36955] util.FSUtils(471): Created version file at hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d with version=8 2023-07-18 07:15:34,956 INFO [Listener at localhost/36955] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42711/user/jenkins/test-data/5a5c5df8-0ea9-f881-a4fd-b917f091e4c9/hbase-staging 2023-07-18 07:15:34,957 DEBUG [Listener at localhost/36955] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-18 07:15:34,957 DEBUG [Listener at localhost/36955] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-18 07:15:34,957 DEBUG [Listener at localhost/36955] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-18 07:15:34,957 DEBUG [Listener at localhost/36955] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
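The restart recorded above — a fresh DFS with three DataNodes, a MiniZooKeeperCluster on client port 63390, a new version file under the new hbase.rootdir, and master/region-server ports set to random — corresponds to the StartMiniClusterOption line logged earlier (numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1, createRootDir=false, createWALDir=false). Below is a minimal sketch of how a test requests exactly this topology, assuming the public HBaseTestingUtility/StartMiniClusterOption test API; it is not code taken from this run.

// Hypothetical sketch mirroring the StartMiniClusterOption values printed in the log.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartupSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // 1 master, 3 region servers, 3 DataNodes, 1 ZooKeeper server,
    // and no pre-created root dir or WAL dir.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .createRootDir(false)
        .createWALDir(false)
        .build();
    // Starts the mini DFS, MiniZooKeeperCluster, HMaster and region servers,
    // producing startup output like the lines that follow in this log.
    util.startMiniCluster(option);
  }
}

Port numbers such as 63390, 38555 or 43187 in the lines that follow are chosen at random per run ("Setting Master Port to random", "Setting RegionServer Port to random"), which is why they differ from the ports used before the restart.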
2023-07-18 07:15:34,958 INFO [Listener at localhost/36955] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:34,958 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:34,958 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:34,958 INFO [Listener at localhost/36955] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:34,958 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:34,959 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:34,959 INFO [Listener at localhost/36955] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:34,959 INFO [Listener at localhost/36955] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38555 2023-07-18 07:15:34,960 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:34,961 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:34,962 INFO [Listener at localhost/36955] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38555 connecting to ZooKeeper ensemble=127.0.0.1:63390 2023-07-18 07:15:34,971 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:385550x0, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:34,971 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38555-0x10177493c9e0000 connected 2023-07-18 07:15:34,988 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:34,988 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:34,989 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:34,990 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38555 2023-07-18 07:15:34,990 DEBUG [Listener at localhost/36955] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38555 2023-07-18 07:15:34,991 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38555 2023-07-18 07:15:34,991 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38555 2023-07-18 07:15:34,991 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38555 2023-07-18 07:15:34,993 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:34,993 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:34,993 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:34,994 INFO [Listener at localhost/36955] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-18 07:15:34,994 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:34,994 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:34,994 INFO [Listener at localhost/36955] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-18 07:15:34,994 INFO [Listener at localhost/36955] http.HttpServer(1146): Jetty bound to port 44151 2023-07-18 07:15:34,995 INFO [Listener at localhost/36955] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:34,996 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:34,996 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@8c04e09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:34,996 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:34,996 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ae9f274{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:35,003 INFO [Listener at localhost/36955] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:35,004 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:35,004 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:35,005 INFO [Listener at localhost/36955] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:35,006 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,007 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1cb67a7f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 07:15:35,008 INFO [Listener at localhost/36955] server.AbstractConnector(333): Started ServerConnector@7dfb63db{HTTP/1.1, (http/1.1)}{0.0.0.0:44151} 2023-07-18 07:15:35,008 INFO [Listener at localhost/36955] server.Server(415): Started @42576ms 2023-07-18 07:15:35,008 INFO [Listener at localhost/36955] master.HMaster(444): hbase.rootdir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d, hbase.cluster.distributed=false 2023-07-18 07:15:35,023 INFO [Listener at localhost/36955] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:35,023 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,023 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,023 INFO [Listener at localhost/36955] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 
07:15:35,023 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,023 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:35,024 INFO [Listener at localhost/36955] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:35,026 INFO [Listener at localhost/36955] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43187 2023-07-18 07:15:35,026 INFO [Listener at localhost/36955] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:35,027 DEBUG [Listener at localhost/36955] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:35,028 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,029 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,029 INFO [Listener at localhost/36955] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43187 connecting to ZooKeeper ensemble=127.0.0.1:63390 2023-07-18 07:15:35,034 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:431870x0, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:35,035 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:431870x0, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:35,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43187-0x10177493c9e0001 connected 2023-07-18 07:15:35,036 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:35,036 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:35,037 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43187 2023-07-18 07:15:35,037 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43187 2023-07-18 07:15:35,040 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43187 2023-07-18 07:15:35,041 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43187 2023-07-18 07:15:35,041 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43187 2023-07-18 07:15:35,043 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:35,043 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:35,043 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:35,044 INFO [Listener at localhost/36955] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:35,044 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:35,044 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:35,044 INFO [Listener at localhost/36955] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:35,044 INFO [Listener at localhost/36955] http.HttpServer(1146): Jetty bound to port 37711 2023-07-18 07:15:35,045 INFO [Listener at localhost/36955] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:35,047 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,048 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e653533{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:35,048 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,048 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d1f03cf{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:35,054 INFO [Listener at localhost/36955] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:35,055 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:35,055 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:35,056 INFO [Listener at localhost/36955] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:35,056 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,057 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@59d67bc5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:35,058 INFO [Listener at localhost/36955] server.AbstractConnector(333): Started ServerConnector@4e213774{HTTP/1.1, (http/1.1)}{0.0.0.0:37711} 2023-07-18 07:15:35,058 INFO [Listener at localhost/36955] server.Server(415): Started @42626ms 2023-07-18 07:15:35,069 INFO [Listener at localhost/36955] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:35,069 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,069 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,069 INFO [Listener at localhost/36955] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:35,070 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,070 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:35,070 INFO [Listener at localhost/36955] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:35,070 INFO [Listener at localhost/36955] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40483 2023-07-18 07:15:35,071 INFO [Listener at localhost/36955] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:35,072 DEBUG [Listener at localhost/36955] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:35,072 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,073 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,074 INFO [Listener at localhost/36955] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40483 connecting to ZooKeeper ensemble=127.0.0.1:63390 2023-07-18 07:15:35,077 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:404830x0, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:35,079 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40483-0x10177493c9e0002 connected 2023-07-18 07:15:35,079 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): 
regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:35,079 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:35,080 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:35,081 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40483 2023-07-18 07:15:35,082 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40483 2023-07-18 07:15:35,086 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40483 2023-07-18 07:15:35,087 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40483 2023-07-18 07:15:35,087 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40483 2023-07-18 07:15:35,089 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:35,089 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:35,089 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:35,089 INFO [Listener at localhost/36955] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:35,089 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:35,089 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:35,090 INFO [Listener at localhost/36955] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
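The "Set watcher on znode that does not yet exist" entries above show a region server registering ZooKeeper watches on /hbase/master and /hbase/running before those znodes have been created. A minimal sketch of that pattern with the plain ZooKeeper client follows; the quorum address and znode path are taken from the log, while the class name and latch are illustrative and this is not HBase's own ZKWatcher/ZKUtil code.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class MasterZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch created = new CountDownLatch(1);
    Watcher watcher = (WatchedEvent event) -> {
      // Fires when the master eventually creates /hbase/master (NodeCreated).
      if (event.getType() == Watcher.Event.EventType.NodeCreated
          && "/hbase/master".equals(event.getPath())) {
        created.countDown();
      }
    };
    ZooKeeper zk = new ZooKeeper("127.0.0.1:63390", 90000, watcher);
    // exists() returns null because the znode is not there yet,
    // but the watch is still registered and fires on creation.
    if (zk.exists("/hbase/master", watcher) == null) {
      System.out.println("watching /hbase/master until it is created");
    }
    created.await(90, TimeUnit.SECONDS);
    zk.close();
  }
}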
2023-07-18 07:15:35,090 INFO [Listener at localhost/36955] http.HttpServer(1146): Jetty bound to port 42051 2023-07-18 07:15:35,090 INFO [Listener at localhost/36955] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:35,091 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,092 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@115f03f4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:35,092 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,092 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e4c685d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:35,096 INFO [Listener at localhost/36955] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:35,097 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:35,097 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:35,097 INFO [Listener at localhost/36955] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:35,099 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,100 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6cbf8af{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:35,102 INFO [Listener at localhost/36955] server.AbstractConnector(333): Started ServerConnector@3e29d14d{HTTP/1.1, (http/1.1)}{0.0.0.0:42051} 2023-07-18 07:15:35,102 INFO [Listener at localhost/36955] server.Server(415): Started @42670ms 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:35,113 INFO [Listener at localhost/36955] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:35,115 INFO [Listener at localhost/36955] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34145 2023-07-18 07:15:35,115 INFO [Listener at localhost/36955] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:35,116 DEBUG [Listener at localhost/36955] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:35,117 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,117 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,118 INFO [Listener at localhost/36955] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34145 connecting to ZooKeeper ensemble=127.0.0.1:63390 2023-07-18 07:15:35,122 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:341450x0, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:35,123 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:341450x0, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:35,123 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34145-0x10177493c9e0003 connected 2023-07-18 07:15:35,124 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:35,124 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:35,126 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34145 2023-07-18 07:15:35,127 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34145 2023-07-18 07:15:35,127 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34145 2023-07-18 07:15:35,128 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34145 2023-07-18 07:15:35,128 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34145 2023-07-18 07:15:35,130 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:35,130 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:35,130 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:35,131 INFO [Listener at localhost/36955] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:35,131 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:35,131 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:35,131 INFO [Listener at localhost/36955] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:35,131 INFO [Listener at localhost/36955] http.HttpServer(1146): Jetty bound to port 44157 2023-07-18 07:15:35,131 INFO [Listener at localhost/36955] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:35,135 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,135 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3d7ca1bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:35,135 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,135 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c8bb404{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:35,140 INFO [Listener at localhost/36955] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:35,140 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:35,140 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:35,140 INFO [Listener at localhost/36955] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-18 07:15:35,141 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:35,142 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2a31ff28{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:35,143 INFO [Listener at localhost/36955] server.AbstractConnector(333): Started ServerConnector@28b8c4bb{HTTP/1.1, (http/1.1)}{0.0.0.0:44157} 2023-07-18 07:15:35,143 INFO [Listener at localhost/36955] server.Server(415): Started @42711ms 2023-07-18 07:15:35,148 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:35,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@76b45728{HTTP/1.1, (http/1.1)}{0.0.0.0:33167} 2023-07-18 07:15:35,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42721ms 2023-07-18 07:15:35,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,155 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 07:15:35,155 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,157 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:35,157 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:35,157 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:35,157 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:35,158 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:35,160 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38555,1689664534957 from backup master directory 2023-07-18 
07:15:35,160 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:35,161 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,161 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-18 07:15:35,161 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:35,162 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/hbase.id with ID: 7963786a-d166-4f0b-9db9-0a27708879e8 2023-07-18 07:15:35,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:35,196 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,206 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x43c2f650 to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:35,210 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25958f5a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:35,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:35,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-18 07:15:35,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:35,213 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store-tmp 2023-07-18 07:15:35,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:35,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 07:15:35,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:35,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:35,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 07:15:35,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:35,224 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
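The descriptor printed above for the master's local 'master:store' region (a single 'proc' family with VERSIONS=1, BLOOMFILTER=ROW, BLOCKSIZE=65536) can be approximated with the public descriptor builders. This is a hedged sketch for reference only; the real region is created internally by MasterRegion, not by test code, and the class name here is invented.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static TableDescriptor build() {
    // Mirrors the schema logged above: one 'proc' family, one version, ROW blooms.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)
        .setBloomFilterType(BloomType.ROW)
        .setBlocksize(65536)
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();
  }
}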
2023-07-18 07:15:35,224 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:35,225 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/WALs/jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,228 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38555%2C1689664534957, suffix=, logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/WALs/jenkins-hbase4.apache.org,38555,1689664534957, archiveDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/oldWALs, maxLogs=10 2023-07-18 07:15:35,243 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK] 2023-07-18 07:15:35,243 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK] 2023-07-18 07:15:35,244 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK] 2023-07-18 07:15:35,250 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/WALs/jenkins-hbase4.apache.org,38555,1689664534957/jenkins-hbase4.apache.org%2C38555%2C1689664534957.1689664535228 2023-07-18 07:15:35,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK], DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK], DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK]] 2023-07-18 07:15:35,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:35,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:35,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:35,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:35,254 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:35,255 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-18 07:15:35,255 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-18 07:15:35,256 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:35,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:35,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:35,259 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-18 07:15:35,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:35,263 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10236294720, jitterRate=-0.046670764684677124}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:35,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:35,264 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-18 07:15:35,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-18 07:15:35,265 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-18 07:15:35,265 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-18 07:15:35,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-18 07:15:35,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-18 07:15:35,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-18 07:15:35,270 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-18 07:15:35,271 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-18 07:15:35,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-18 07:15:35,272 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-18 07:15:35,272 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-18 07:15:35,277 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,277 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-18 07:15:35,278 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-18 07:15:35,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-18 07:15:35,280 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:35,280 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:35,280 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-18 07:15:35,280 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:35,280 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,284 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38555,1689664534957, sessionid=0x10177493c9e0000, setting cluster-up flag (Was=false) 2023-07-18 07:15:35,286 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-18 07:15:35,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,295 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,299 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-18 07:15:35,300 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:35,300 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.hbase-snapshot/.tmp 2023-07-18 07:15:35,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-18 07:15:35,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-18 07:15:35,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-18 07:15:35,303 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 07:15:35,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
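The coprocessor lines above (RSGroupAdminService registered, org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded as a system coprocessor) come from the rsgroup feature being enabled in the test's configuration. Below is a sketch of the usual wiring: the property keys are standard HBase keys, but exactly how this test sets them is an assumption, and RSGroupBasedLoadBalancer is the balancer normally paired with the endpoint rather than something shown in this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupConfSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint on the master (seen in the log above).
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // The endpoint is normally used together with the group-aware balancer.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    return conf;
  }
}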
2023-07-18 07:15:35,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:35,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 07:15:35,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 07:15:35,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-18 07:15:35,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:35,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,317 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689664565317 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-18 07:15:35,318 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,318 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:35,318 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-18 07:15:35,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-18 07:15:35,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-18 07:15:35,319 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-18 07:15:35,320 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:35,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-18 07:15:35,320 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-18 07:15:35,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664535320,5,FailOnTimeoutGroup] 2023-07-18 07:15:35,321 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664535321,5,FailOnTimeoutGroup] 2023-07-18 07:15:35,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-18 07:15:35,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,321 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,335 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:35,336 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:35,336 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d 2023-07-18 07:15:35,350 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(951): ClusterId : 7963786a-d166-4f0b-9db9-0a27708879e8 2023-07-18 07:15:35,350 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(951): ClusterId : 7963786a-d166-4f0b-9db9-0a27708879e8 2023-07-18 07:15:35,351 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:35,352 DEBUG [RS:1;jenkins-hbase4:40483] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:35,351 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(951): ClusterId : 7963786a-d166-4f0b-9db9-0a27708879e8 2023-07-18 07:15:35,354 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:35,357 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:35,357 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:35,357 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:35,357 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:35,359 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:35,359 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:35,362 DEBUG [RS:1;jenkins-hbase4:40483] zookeeper.ReadOnlyZKClient(139): Connect 0x21bb2893 to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:35,362 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:35,363 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:35,363 DEBUG [RS:0;jenkins-hbase4:43187] zookeeper.ReadOnlyZKClient(139): Connect 0x48128c77 to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:35,365 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:35,368 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:35,368 DEBUG [RS:2;jenkins-hbase4:34145] zookeeper.ReadOnlyZKClient(139): Connect 0x33456cc8 to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:35,375 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:35,381 DEBUG [RS:1;jenkins-hbase4:40483] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f7cc2d4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:35,382 DEBUG [RS:1;jenkins-hbase4:40483] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b4343e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:35,384 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/info 2023-07-18 07:15:35,384 DEBUG [RS:0;jenkins-hbase4:43187] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@c78cbc4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:35,384 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:35,384 DEBUG [RS:0;jenkins-hbase4:43187] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@799021e1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:35,385 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:35,385 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:35,387 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:35,387 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:35,388 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 
07:15:35,388 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:35,389 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/table 2023-07-18 07:15:35,391 DEBUG [RS:2;jenkins-hbase4:34145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d8284d0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:35,391 DEBUG [RS:2;jenkins-hbase4:34145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39bd07b8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:35,393 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:35,394 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:40483 2023-07-18 07:15:35,394 INFO [RS:1;jenkins-hbase4:40483] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:35,394 INFO [RS:1;jenkins-hbase4:40483] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:35,394 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1022): About to register with Master. 
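The repeated CompactionConfiguration entries (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2) reflect the stock compaction defaults rather than anything this test tunes. For reference, a sketch of the configuration keys behind those numbers, with the logged values written out explicitly; the class name is illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionDefaultsSketch {
  public static Configuration defaults() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);   // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);  // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
    return conf;
  }
}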
2023-07-18 07:15:35,395 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:35,395 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38555,1689664534957 with isa=jenkins-hbase4.apache.org/172.31.14.131:40483, startcode=1689664535069 2023-07-18 07:15:35,395 DEBUG [RS:1;jenkins-hbase4:40483] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:35,396 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740 2023-07-18 07:15:35,397 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740 2023-07-18 07:15:35,399 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45159, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:35,401 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,401 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-18 07:15:35,401 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43187 2023-07-18 07:15:35,401 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 07:15:35,401 INFO [RS:0;jenkins-hbase4:43187] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:35,402 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-18 07:15:35,402 INFO [RS:0;jenkins-hbase4:43187] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:35,402 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-18 07:15:35,403 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38555,1689664534957 with isa=jenkins-hbase4.apache.org/172.31.14.131:43187, startcode=1689664535022 2023-07-18 07:15:35,403 DEBUG [RS:0;jenkins-hbase4:43187] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:35,404 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:35,405 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60023, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:35,405 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34145 2023-07-18 07:15:35,405 INFO [RS:2;jenkins-hbase4:34145] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:35,405 INFO [RS:2;jenkins-hbase4:34145] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:35,405 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:35,405 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,405 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d 2023-07-18 07:15:35,405 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43393 2023-07-18 07:15:35,405 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44151 2023-07-18 07:15:35,405 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-18 07:15:35,405 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-18 07:15:35,405 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d 2023-07-18 07:15:35,405 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38555,1689664534957 with isa=jenkins-hbase4.apache.org/172.31.14.131:34145, startcode=1689664535113 2023-07-18 07:15:35,406 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43393 2023-07-18 07:15:35,406 DEBUG [RS:2;jenkins-hbase4:34145] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:35,406 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44151 2023-07-18 07:15:35,407 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:35,407 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42633, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:35,407 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,407 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-18 07:15:35,407 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-18 07:15:35,408 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d 2023-07-18 07:15:35,408 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43393 2023-07-18 07:15:35,408 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44151 2023-07-18 07:15:35,412 DEBUG [RS:1;jenkins-hbase4:40483] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,412 WARN [RS:1;jenkins-hbase4:40483] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 07:15:35,412 INFO [RS:1;jenkins-hbase4:40483] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:35,413 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:35,413 DEBUG [RS:0;jenkins-hbase4:43187] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,413 DEBUG [RS:2;jenkins-hbase4:34145] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,413 WARN [RS:0;jenkins-hbase4:43187] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:35,413 WARN [RS:2;jenkins-hbase4:34145] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-18 07:15:35,413 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40483,1689664535069] 2023-07-18 07:15:35,413 INFO [RS:2;jenkins-hbase4:34145] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:35,413 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,413 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,413 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43187,1689664535022] 2023-07-18 07:15:35,413 INFO [RS:0;jenkins-hbase4:43187] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:35,413 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10353107680, jitterRate=-0.03579171001911163}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:35,413 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:35,413 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:35,413 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:35,413 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:35,413 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:35,413 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:35,414 
DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,414 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34145,1689664535113] 2023-07-18 07:15:35,418 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:35,419 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:35,425 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-18 07:15:35,425 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-18 07:15:35,426 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-18 07:15:35,427 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-18 07:15:35,429 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-18 07:15:35,430 DEBUG [RS:1;jenkins-hbase4:40483] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,430 DEBUG [RS:2;jenkins-hbase4:34145] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,430 DEBUG [RS:1;jenkins-hbase4:40483] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,430 DEBUG [RS:0;jenkins-hbase4:43187] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,430 DEBUG [RS:2;jenkins-hbase4:34145] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,430 DEBUG [RS:1;jenkins-hbase4:40483] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,431 DEBUG [RS:0;jenkins-hbase4:43187] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,431 DEBUG [RS:2;jenkins-hbase4:34145] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, 
quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,431 DEBUG [RS:0;jenkins-hbase4:43187] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,431 DEBUG [RS:1;jenkins-hbase4:40483] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:35,431 INFO [RS:1;jenkins-hbase4:40483] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:35,431 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:35,432 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:35,433 INFO [RS:2;jenkins-hbase4:34145] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:35,433 INFO [RS:0;jenkins-hbase4:43187] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:35,434 INFO [RS:1;jenkins-hbase4:40483] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:35,434 INFO [RS:2;jenkins-hbase4:34145] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:35,434 INFO [RS:1;jenkins-hbase4:40483] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:35,434 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,434 INFO [RS:2;jenkins-hbase4:34145] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:35,434 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,434 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:35,435 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:35,436 INFO [RS:0;jenkins-hbase4:43187] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:35,439 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:35,439 INFO [RS:0;jenkins-hbase4:43187] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:35,439 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,439 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,439 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,439 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,440 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,440 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,440 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,440 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:35,440 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,440 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,441 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,441 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,441 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:35,441 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,441 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,441 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:35,441 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 
07:15:35,441 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,441 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,442 DEBUG [RS:2;jenkins-hbase4:34145] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,442 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,442 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,442 DEBUG [RS:1;jenkins-hbase4:40483] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,443 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,444 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,444 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,444 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,444 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,447 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,444 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,447 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,447 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,451 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,451 DEBUG [RS:0;jenkins-hbase4:43187] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:35,455 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,455 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,455 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,471 INFO [RS:2;jenkins-hbase4:34145] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:35,471 INFO [RS:1;jenkins-hbase4:40483] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:35,471 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34145,1689664535113-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,471 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40483,1689664535069-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,478 INFO [RS:0;jenkins-hbase4:43187] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:35,478 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43187,1689664535022-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-18 07:15:35,494 INFO [RS:1;jenkins-hbase4:40483] regionserver.Replication(203): jenkins-hbase4.apache.org,40483,1689664535069 started 2023-07-18 07:15:35,494 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40483,1689664535069, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40483, sessionid=0x10177493c9e0002 2023-07-18 07:15:35,494 INFO [RS:2;jenkins-hbase4:34145] regionserver.Replication(203): jenkins-hbase4.apache.org,34145,1689664535113 started 2023-07-18 07:15:35,494 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:35,494 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34145,1689664535113, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34145, sessionid=0x10177493c9e0003 2023-07-18 07:15:35,494 DEBUG [RS:1;jenkins-hbase4:40483] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,494 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:35,494 DEBUG [RS:2;jenkins-hbase4:34145] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,494 DEBUG [RS:2;jenkins-hbase4:34145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34145,1689664535113' 2023-07-18 07:15:35,494 DEBUG [RS:1;jenkins-hbase4:40483] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40483,1689664535069' 2023-07-18 07:15:35,495 DEBUG [RS:1;jenkins-hbase4:40483] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:35,495 DEBUG [RS:2;jenkins-hbase4:34145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:35,495 DEBUG [RS:1;jenkins-hbase4:40483] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:35,495 DEBUG [RS:2;jenkins-hbase4:34145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:35,496 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:35,496 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:35,496 DEBUG [RS:2;jenkins-hbase4:34145] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:35,496 DEBUG [RS:2;jenkins-hbase4:34145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34145,1689664535113' 2023-07-18 07:15:35,496 DEBUG [RS:2;jenkins-hbase4:34145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:35,496 DEBUG [RS:2;jenkins-hbase4:34145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:35,497 DEBUG [RS:2;jenkins-hbase4:34145] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot 
started 2023-07-18 07:15:35,497 INFO [RS:2;jenkins-hbase4:34145] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:35,497 INFO [RS:2;jenkins-hbase4:34145] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 07:15:35,497 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:35,497 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:35,497 DEBUG [RS:1;jenkins-hbase4:40483] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:35,497 DEBUG [RS:1;jenkins-hbase4:40483] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40483,1689664535069' 2023-07-18 07:15:35,497 DEBUG [RS:1;jenkins-hbase4:40483] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:35,498 DEBUG [RS:1;jenkins-hbase4:40483] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:35,498 DEBUG [RS:1;jenkins-hbase4:40483] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:35,498 INFO [RS:1;jenkins-hbase4:40483] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:35,498 INFO [RS:1;jenkins-hbase4:40483] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 07:15:35,498 INFO [RS:0;jenkins-hbase4:43187] regionserver.Replication(203): jenkins-hbase4.apache.org,43187,1689664535022 started 2023-07-18 07:15:35,499 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43187,1689664535022, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43187, sessionid=0x10177493c9e0001 2023-07-18 07:15:35,499 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:35,499 DEBUG [RS:0;jenkins-hbase4:43187] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,499 DEBUG [RS:0;jenkins-hbase4:43187] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43187,1689664535022' 2023-07-18 07:15:35,499 DEBUG [RS:0;jenkins-hbase4:43187] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:35,499 DEBUG [RS:0;jenkins-hbase4:43187] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:35,499 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:35,500 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:35,500 DEBUG [RS:0;jenkins-hbase4:43187] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,500 DEBUG [RS:0;jenkins-hbase4:43187] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43187,1689664535022' 2023-07-18 07:15:35,500 DEBUG 
[RS:0;jenkins-hbase4:43187] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:35,500 DEBUG [RS:0;jenkins-hbase4:43187] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:35,500 DEBUG [RS:0;jenkins-hbase4:43187] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:35,500 INFO [RS:0;jenkins-hbase4:43187] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:35,500 INFO [RS:0;jenkins-hbase4:43187] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-18 07:15:35,579 DEBUG [jenkins-hbase4:38555] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-18 07:15:35,579 DEBUG [jenkins-hbase4:38555] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:35,579 DEBUG [jenkins-hbase4:38555] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:35,579 DEBUG [jenkins-hbase4:38555] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:35,579 DEBUG [jenkins-hbase4:38555] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:35,580 DEBUG [jenkins-hbase4:38555] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:35,581 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43187,1689664535022, state=OPENING 2023-07-18 07:15:35,583 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-18 07:15:35,584 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:35,584 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:35,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43187,1689664535022}] 2023-07-18 07:15:35,599 INFO [RS:2;jenkins-hbase4:34145] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34145%2C1689664535113, suffix=, logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,34145,1689664535113, archiveDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs, maxLogs=32 2023-07-18 07:15:35,600 INFO [RS:1;jenkins-hbase4:40483] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40483%2C1689664535069, suffix=, logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,40483,1689664535069, archiveDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs, maxLogs=32 2023-07-18 07:15:35,602 INFO [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C43187%2C1689664535022, suffix=, logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,43187,1689664535022, archiveDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs, maxLogs=32 2023-07-18 07:15:35,615 WARN [ReadOnlyZKClient-127.0.0.1:63390@0x43c2f650] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-18 07:15:35,616 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:35,617 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK] 2023-07-18 07:15:35,618 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK] 2023-07-18 07:15:35,618 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK] 2023-07-18 07:15:35,629 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK] 2023-07-18 07:15:35,630 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK] 2023-07-18 07:15:35,630 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK] 2023-07-18 07:15:35,635 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45820, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:35,645 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43187] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:45820 deadline: 1689664595635, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,646 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK] 2023-07-18 07:15:35,646 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK] 2023-07-18 07:15:35,646 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK] 2023-07-18 07:15:35,647 INFO [RS:1;jenkins-hbase4:40483] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,40483,1689664535069/jenkins-hbase4.apache.org%2C40483%2C1689664535069.1689664535600 2023-07-18 07:15:35,647 DEBUG [RS:1;jenkins-hbase4:40483] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK], DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK], DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK]] 2023-07-18 07:15:35,648 INFO [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,43187,1689664535022/jenkins-hbase4.apache.org%2C43187%2C1689664535022.1689664535602 2023-07-18 07:15:35,650 DEBUG [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK], DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK]] 2023-07-18 07:15:35,651 INFO [RS:2;jenkins-hbase4:34145] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,34145,1689664535113/jenkins-hbase4.apache.org%2C34145%2C1689664535113.1689664535599 2023-07-18 07:15:35,651 DEBUG [RS:2;jenkins-hbase4:34145] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK], DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK], DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK]] 2023-07-18 07:15:35,739 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:35,741 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:35,742 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45824, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:35,746 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-18 07:15:35,746 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:35,748 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43187%2C1689664535022.meta, suffix=.meta, 
logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,43187,1689664535022, archiveDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs, maxLogs=32 2023-07-18 07:15:35,763 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK] 2023-07-18 07:15:35,764 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK] 2023-07-18 07:15:35,764 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK] 2023-07-18 07:15:35,766 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,43187,1689664535022/jenkins-hbase4.apache.org%2C43187%2C1689664535022.meta.1689664535748.meta 2023-07-18 07:15:35,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK], DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK]] 2023-07-18 07:15:35,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:35,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:35,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-18 07:15:35,767 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-18 07:15:35,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-18 07:15:35,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:35,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-18 07:15:35,767 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-18 07:15:35,769 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-18 07:15:35,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/info 2023-07-18 07:15:35,770 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/info 2023-07-18 07:15:35,771 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-18 07:15:35,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:35,771 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-18 07:15:35,772 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:35,772 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/rep_barrier 2023-07-18 07:15:35,772 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-18 07:15:35,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:35,773 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-18 07:15:35,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/table 2023-07-18 07:15:35,774 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/table 2023-07-18 07:15:35,774 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-18 07:15:35,775 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:35,776 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740 2023-07-18 07:15:35,777 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740 2023-07-18 07:15:35,779 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-18 07:15:35,780 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-18 07:15:35,781 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9720524480, jitterRate=-0.09470561146736145}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-18 07:15:35,781 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-18 07:15:35,781 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689664535739 2023-07-18 07:15:35,786 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-18 07:15:35,787 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-18 07:15:35,787 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,43187,1689664535022, state=OPEN 2023-07-18 07:15:35,790 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-18 07:15:35,790 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-18 07:15:35,792 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-18 07:15:35,792 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,43187,1689664535022 in 206 msec 2023-07-18 07:15:35,794 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-18 07:15:35,794 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 366 msec 2023-07-18 07:15:35,795 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 491 msec 2023-07-18 07:15:35,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689664535795, completionTime=-1 2023-07-18 07:15:35,795 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-18 07:15:35,795 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-18 07:15:35,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-18 07:15:35,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689664595800 2023-07-18 07:15:35,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689664655800 2023-07-18 07:15:35,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38555,1689664534957-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38555,1689664534957-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38555,1689664534957-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38555, period=300000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-18 07:15:35,806 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:35,807 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-18 07:15:35,807 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-18 07:15:35,808 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:35,809 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:35,811 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:35,811 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a empty. 2023-07-18 07:15:35,812 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:35,812 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-18 07:15:35,827 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:35,828 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a9ebb423e48347789f3922db24f9672a, NAME => 'hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp 2023-07-18 07:15:35,843 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:35,843 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a9ebb423e48347789f3922db24f9672a, disabling compactions & flushes 2023-07-18 07:15:35,843 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 
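The 'hbase:namespace' create above is driven internally by the master's TableNamespaceManager, but an equivalent descriptor can be expressed with the public HBase 2.x Java client API. A minimal sketch, assuming a reachable cluster and a hypothetical user table name (clients do not create hbase:namespace themselves):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the logged column family: info, BLOOMFILTER=ROW, IN_MEMORY=true,
      // VERSIONS=10, BLOCKSIZE=8192; remaining attributes left at their defaults.
      ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("info"))
          .setBloomFilterType(BloomType.ROW)
          .setInMemory(true)
          .setMaxVersions(10)
          .setBlocksize(8192)
          .build();
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("demo_namespace_like")) // hypothetical name
          .setColumnFamily(info)
          .build();
      // Admin.createTable drives a CreateTableProcedure on the master,
      // the same procedure class seen in the log entries above.
      admin.createTable(td);
    }
  }
}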
2023-07-18 07:15:35,843 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:35,843 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. after waiting 0 ms 2023-07-18 07:15:35,843 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:35,843 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:35,843 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a9ebb423e48347789f3922db24f9672a: 2023-07-18 07:15:35,846 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:35,847 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664535847"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664535847"}]},"ts":"1689664535847"} 2023-07-18 07:15:35,849 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:35,850 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:35,850 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664535850"}]},"ts":"1689664535850"} 2023-07-18 07:15:35,851 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-18 07:15:35,855 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:35,855 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:35,855 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:35,855 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:35,855 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:35,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a9ebb423e48347789f3922db24f9672a, ASSIGN}] 2023-07-18 07:15:35,857 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a9ebb423e48347789f3922db24f9672a, ASSIGN 2023-07-18 07:15:35,858 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a9ebb423e48347789f3922db24f9672a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34145,1689664535113; forceNewPlan=false, retain=false 2023-07-18 07:15:35,951 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:35,952 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-18 07:15:35,954 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:35,955 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:35,957 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:35,957 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3 empty. 
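The 'hbase:rsgroup' descriptor logged above carries a table-level coprocessor (MultiRowMutationEndpoint) and the DisabledRegionSplitPolicy. A hedged sketch of how a comparable descriptor could be declared through the public client API; the table name is hypothetical, since the real hbase:rsgroup table is created by the RSGroupInfoManager startup worker:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeDescriptorSketch {
  public static TableDescriptor build() throws Exception {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_rsgroup_like")) // hypothetical name
        // coprocessor$1 => '|...MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // METADATA {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'}
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        // Single column family 'm' with one version, matching the logged schema
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("m"))
            .setMaxVersions(1)
            .build())
        .build();
  }
}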
2023-07-18 07:15:35,958 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:35,958 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-18 07:15:35,977 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:35,979 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a9b68dc82f735620d2836da347a5b8c3, NAME => 'hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp 2023-07-18 07:15:36,008 INFO [jenkins-hbase4:38555] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 07:15:36,010 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a9ebb423e48347789f3922db24f9672a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:36,010 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:36,010 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664536010"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664536010"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664536010"}]},"ts":"1689664536010"} 2023-07-18 07:15:36,010 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a9b68dc82f735620d2836da347a5b8c3, disabling compactions & flushes 2023-07-18 07:15:36,010 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:36,010 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:36,010 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. after waiting 0 ms 2023-07-18 07:15:36,010 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 
2023-07-18 07:15:36,010 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:36,010 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a9b68dc82f735620d2836da347a5b8c3: 2023-07-18 07:15:36,013 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:36,014 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664536014"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664536014"}]},"ts":"1689664536014"} 2023-07-18 07:15:36,019 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE; OpenRegionProcedure a9ebb423e48347789f3922db24f9672a, server=jenkins-hbase4.apache.org,34145,1689664535113}] 2023-07-18 07:15:36,020 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:36,024 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:36,024 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664536024"}]},"ts":"1689664536024"} 2023-07-18 07:15:36,031 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-18 07:15:36,035 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:36,036 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:36,036 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:36,036 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:36,036 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:36,036 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a9b68dc82f735620d2836da347a5b8c3, ASSIGN}] 2023-07-18 07:15:36,037 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a9b68dc82f735620d2836da347a5b8c3, ASSIGN 2023-07-18 07:15:36,038 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=8, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a9b68dc82f735620d2836da347a5b8c3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43187,1689664535022; forceNewPlan=false, retain=false 2023-07-18 07:15:36,175 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 
07:15:36,175 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-18 07:15:36,177 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57390, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-18 07:15:36,180 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:36,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9ebb423e48347789f3922db24f9672a, NAME => 'hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:36,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:36,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,181 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,184 INFO [StoreOpener-a9ebb423e48347789f3922db24f9672a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,185 DEBUG [StoreOpener-a9ebb423e48347789f3922db24f9672a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/info 2023-07-18 07:15:36,185 DEBUG [StoreOpener-a9ebb423e48347789f3922db24f9672a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/info 2023-07-18 07:15:36,185 INFO [StoreOpener-a9ebb423e48347789f3922db24f9672a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9ebb423e48347789f3922db24f9672a columnFamilyName info 2023-07-18 07:15:36,186 INFO [StoreOpener-a9ebb423e48347789f3922db24f9672a-1] regionserver.HStore(310): 
Store=a9ebb423e48347789f3922db24f9672a/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:36,186 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,187 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,188 INFO [jenkins-hbase4:38555] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-18 07:15:36,189 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=a9b68dc82f735620d2836da347a5b8c3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:36,189 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664536189"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664536189"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664536189"}]},"ts":"1689664536189"} 2023-07-18 07:15:36,190 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:36,191 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=8, state=RUNNABLE; OpenRegionProcedure a9b68dc82f735620d2836da347a5b8c3, server=jenkins-hbase4.apache.org,43187,1689664535022}] 2023-07-18 07:15:36,194 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:36,195 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9ebb423e48347789f3922db24f9672a; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9461142720, jitterRate=-0.11886242032051086}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:36,195 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9ebb423e48347789f3922db24f9672a: 2023-07-18 07:15:36,196 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a., pid=7, masterSystemTime=1689664536175 2023-07-18 07:15:36,198 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:36,199 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 
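Once the post open deploy task above has published the region location to hbase:meta, a client can resolve it through the standard RegionLocator API. A small illustrative sketch, assuming a connection to this cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
      // Each location pairs a RegionInfo with the RegionServer currently hosting it,
      // i.e. the same information the master just wrote into hbase:meta.
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        System.out.println(loc.getRegion().getRegionNameAsString()
            + " -> " + loc.getServerName());
      }
    }
  }
}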
2023-07-18 07:15:36,199 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=a9ebb423e48347789f3922db24f9672a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:36,199 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689664536199"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664536199"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664536199"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664536199"}]},"ts":"1689664536199"} 2023-07-18 07:15:36,203 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-18 07:15:36,203 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; OpenRegionProcedure a9ebb423e48347789f3922db24f9672a, server=jenkins-hbase4.apache.org,34145,1689664535113 in 181 msec 2023-07-18 07:15:36,204 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-18 07:15:36,205 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a9ebb423e48347789f3922db24f9672a, ASSIGN in 347 msec 2023-07-18 07:15:36,205 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:36,205 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664536205"}]},"ts":"1689664536205"} 2023-07-18 07:15:36,206 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-18 07:15:36,208 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-18 07:15:36,208 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:36,209 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 401 msec 2023-07-18 07:15:36,213 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:36,213 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:36,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:36,217 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57396, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-18 07:15:36,219 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-18 07:15:36,227 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:36,229 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-18 07:15:36,231 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 07:15:36,235 DEBUG [PEWorker-1] procedure.MasterProcedureScheduler(526): NAMESPACE 'hbase', shared lock count=1 2023-07-18 07:15:36,235 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-18 07:15:36,348 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:36,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a9b68dc82f735620d2836da347a5b8c3, NAME => 'hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:36,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-18 07:15:36,348 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. service=MultiRowMutationService 2023-07-18 07:15:36,348 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
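The CreateNamespaceProcedure entries above bootstrap the built-in 'default' and 'hbase' namespaces. User namespaces go through the same procedure; a minimal sketch using the public Admin API (the namespace name here is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNamespaceSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Triggers a CreateNamespaceProcedure on the master, like the ones logged above
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName());
      }
    }
  }
}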
2023-07-18 07:15:36,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:36,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,349 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,350 INFO [StoreOpener-a9b68dc82f735620d2836da347a5b8c3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,351 DEBUG [StoreOpener-a9b68dc82f735620d2836da347a5b8c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/m 2023-07-18 07:15:36,351 DEBUG [StoreOpener-a9b68dc82f735620d2836da347a5b8c3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/m 2023-07-18 07:15:36,351 INFO [StoreOpener-a9b68dc82f735620d2836da347a5b8c3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a9b68dc82f735620d2836da347a5b8c3 columnFamilyName m 2023-07-18 07:15:36,352 INFO [StoreOpener-a9b68dc82f735620d2836da347a5b8c3-1] regionserver.HStore(310): Store=a9b68dc82f735620d2836da347a5b8c3/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:36,353 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,353 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,355 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:36,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:36,358 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a9b68dc82f735620d2836da347a5b8c3; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@60578c37, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:36,358 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a9b68dc82f735620d2836da347a5b8c3: 2023-07-18 07:15:36,359 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3., pid=9, masterSystemTime=1689664536345 2023-07-18 07:15:36,360 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:36,360 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:36,361 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=8 updating hbase:meta row=a9b68dc82f735620d2836da347a5b8c3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:36,361 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689664536361"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664536361"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664536361"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664536361"}]},"ts":"1689664536361"} 2023-07-18 07:15:36,363 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=8 2023-07-18 07:15:36,364 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=8, state=SUCCESS; OpenRegionProcedure a9b68dc82f735620d2836da347a5b8c3, server=jenkins-hbase4.apache.org,43187,1689664535022 in 171 msec 2023-07-18 07:15:36,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-18 07:15:36,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a9b68dc82f735620d2836da347a5b8c3, ASSIGN in 328 msec 2023-07-18 07:15:36,372 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:36,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 145 msec 2023-07-18 07:15:36,377 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, 
locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:36,377 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664536377"}]},"ts":"1689664536377"} 2023-07-18 07:15:36,378 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-18 07:15:36,380 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:36,382 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 430 msec 2023-07-18 07:15:36,386 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-18 07:15:36,389 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-18 07:15:36,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.227sec 2023-07-18 07:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-18 07:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-18 07:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-18 07:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38555,1689664534957-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-18 07:15:36,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38555,1689664534957-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
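With master initialization reported complete, the test client connects and disables the balancer (the 'set balanceSwitch=false' request logged a little further below). A hedged sketch of the equivalent client calls against a running cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      System.out.println("active master: " + metrics.getMasterName()
          + ", live region servers: " + metrics.getLiveServerMetrics().size());
      // Turn the balancer off so the test controls region placement itself;
      // the second argument asks not to wait for any in-flight balancer run.
      boolean previous = admin.balancerSwitch(false, false);
      System.out.println("balancer was previously " + (previous ? "on" : "off"));
    }
  }
}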
2023-07-18 07:15:36,391 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-18 07:15:36,452 DEBUG [Listener at localhost/36955] zookeeper.ReadOnlyZKClient(139): Connect 0x168f4c7a to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:36,459 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-18 07:15:36,459 DEBUG [Listener at localhost/36955] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@407b873a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:36,459 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-18 07:15:36,463 DEBUG [hconnection-0x6596f8a2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-18 07:15:36,466 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:36,466 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:36,468 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:36,469 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45836, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-18 07:15:36,469 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-18 07:15:36,471 INFO [Listener at localhost/36955] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:36,471 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:36,473 DEBUG [Listener at localhost/36955] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-18 07:15:36,474 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35714, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-18 07:15:36,477 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-18 07:15:36,477 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): 
master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:36,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-18 07:15:36,478 DEBUG [Listener at localhost/36955] zookeeper.ReadOnlyZKClient(139): Connect 0x015f1d44 to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:36,484 DEBUG [Listener at localhost/36955] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53dea625, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:36,484 INFO [Listener at localhost/36955] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63390 2023-07-18 07:15:36,488 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:36,489 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10177493c9e000a connected 2023-07-18 07:15:36,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:36,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:36,494 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-18 07:15:36,497 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-18 07:15:36,531 INFO [Listener at localhost/36955] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-18 07:15:36,532 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:36,532 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:36,532 INFO [Listener at localhost/36955] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-18 07:15:36,532 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-18 07:15:36,532 INFO [Listener at localhost/36955] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-18 07:15:36,532 INFO [Listener at localhost/36955] ipc.RpcServerFactory(64): Creating 
org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-18 07:15:36,534 INFO [Listener at localhost/36955] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45449 2023-07-18 07:15:36,534 INFO [Listener at localhost/36955] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-18 07:15:36,535 DEBUG [Listener at localhost/36955] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-18 07:15:36,536 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:36,537 INFO [Listener at localhost/36955] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-18 07:15:36,539 INFO [Listener at localhost/36955] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45449 connecting to ZooKeeper ensemble=127.0.0.1:63390 2023-07-18 07:15:36,547 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:454490x0, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-18 07:15:36,550 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45449-0x10177493c9e000b connected 2023-07-18 07:15:36,550 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-18 07:15:36,553 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-18 07:15:36,556 DEBUG [Listener at localhost/36955] zookeeper.ZKUtil(164): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-18 07:15:36,558 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45449 2023-07-18 07:15:36,558 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45449 2023-07-18 07:15:36,560 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45449 2023-07-18 07:15:36,562 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45449 2023-07-18 07:15:36,562 DEBUG [Listener at localhost/36955] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45449 2023-07-18 07:15:36,564 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-18 07:15:36,564 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-18 07:15:36,564 INFO [Listener at localhost/36955] http.HttpServer(900): Added global filter 'securityheaders' 
(class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-18 07:15:36,564 INFO [Listener at localhost/36955] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-18 07:15:36,564 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-18 07:15:36,564 INFO [Listener at localhost/36955] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-18 07:15:36,565 INFO [Listener at localhost/36955] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-18 07:15:36,565 INFO [Listener at localhost/36955] http.HttpServer(1146): Jetty bound to port 39005 2023-07-18 07:15:36,565 INFO [Listener at localhost/36955] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-18 07:15:36,572 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:36,572 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b0a60fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,AVAILABLE} 2023-07-18 07:15:36,572 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:36,572 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1bb8b1db{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-18 07:15:36,578 INFO [Listener at localhost/36955] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-18 07:15:36,579 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-18 07:15:36,579 INFO [Listener at localhost/36955] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-18 07:15:36,579 INFO [Listener at localhost/36955] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-18 07:15:36,582 INFO [Listener at localhost/36955] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-18 07:15:36,583 INFO [Listener at localhost/36955] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@396047b6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:36,585 INFO [Listener at localhost/36955] server.AbstractConnector(333): Started ServerConnector@7a007c37{HTTP/1.1, (http/1.1)}{0.0.0.0:39005} 2023-07-18 07:15:36,585 INFO [Listener at localhost/36955] server.Server(415): Started @44153ms 2023-07-18 07:15:36,588 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(951): ClusterId : 
7963786a-d166-4f0b-9db9-0a27708879e8 2023-07-18 07:15:36,591 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-18 07:15:36,593 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-18 07:15:36,593 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-18 07:15:36,595 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-18 07:15:36,598 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ReadOnlyZKClient(139): Connect 0x15086123 to 127.0.0.1:63390 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-18 07:15:36,606 DEBUG [RS:3;jenkins-hbase4:45449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bcf9df2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-18 07:15:36,606 DEBUG [RS:3;jenkins-hbase4:45449] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1ff00a53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:36,630 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:45449 2023-07-18 07:15:36,630 INFO [RS:3;jenkins-hbase4:45449] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-18 07:15:36,630 INFO [RS:3;jenkins-hbase4:45449] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-18 07:15:36,630 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1022): About to register with Master. 2023-07-18 07:15:36,630 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38555,1689664534957 with isa=jenkins-hbase4.apache.org/172.31.14.131:45449, startcode=1689664536531 2023-07-18 07:15:36,631 DEBUG [RS:3;jenkins-hbase4:45449] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-18 07:15:36,633 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53553, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-18 07:15:36,633 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38555] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,634 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
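The RS:3 startup and registration above corresponds to the test bringing an extra region server into the mini cluster ('Restoring servers: 1'). A sketch of how a test can do that, assuming the HBaseTestingUtility / MiniHBaseCluster helpers from the HBase test framework and a cluster that was started earlier:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class ExtraRegionServerSketch {
  // Assumes 'util' already started a mini cluster earlier in the test.
  public static void addRegionServer(HBaseTestingUtility util) throws Exception {
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    // Spins up one more HRegionServer thread; it registers with the master
    // via reportForDuty, as seen in the RS:3 log entries above.
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline();
    System.out.println("started " + rst.getRegionServer().getServerName());
  }
}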
2023-07-18 07:15:36,634 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d 2023-07-18 07:15:36,634 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43393 2023-07-18 07:15:36,634 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44151 2023-07-18 07:15:36,638 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:36,638 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:36,638 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:36,638 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:36,639 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,639 WARN [RS:3;jenkins-hbase4:45449] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-18 07:15:36,639 INFO [RS:3;jenkins-hbase4:45449] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-18 07:15:36,639 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45449,1689664536531] 2023-07-18 07:15:36,639 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,640 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:36,640 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:36,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:36,645 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-18 07:15:36,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:36,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:36,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:36,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:36,647 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-18 07:15:36,648 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:36,649 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:36,649 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:36,649 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,649 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,649 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,650 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:36,651 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:36,651 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:36,652 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ZKUtil(162): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,653 DEBUG [RS:3;jenkins-hbase4:45449] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-18 07:15:36,653 INFO [RS:3;jenkins-hbase4:45449] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-18 07:15:36,655 INFO [RS:3;jenkins-hbase4:45449] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-18 07:15:36,655 INFO [RS:3;jenkins-hbase4:45449] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-18 07:15:36,655 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:36,662 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-18 07:15:36,665 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
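Note on the MemStoreFlusher and compaction-throughput figures above: the low-water mark is a fixed fraction of the global memstore limit, and the limit itself is a fraction of the region-server heap, so the logged values are derived rather than set directly. A minimal arithmetic sketch follows, assuming the stock defaults hbase.regionserver.global.memstore.size = 0.4 and hbase.regionserver.global.memstore.size.lower.limit = 0.95; the properties actually in force for this run are not printed in the log, so treat the constants as assumptions.

public class MemStoreLimitMath {
  public static void main(String[] args) {
    // Global memstore limit as reported by MemStoreFlusher above.
    double globalLimitMb = 782.4;

    // Assumed hbase.regionserver.global.memstore.size.lower.limit = 0.95:
    // the point at which forced flushing backs off.
    double lowWaterMarkMb = globalLimitMb * 0.95;
    System.out.printf("low-water mark ~= %.1f MB%n", lowWaterMarkMb); // ~743.3 MB, matching the log

    // Assumed hbase.regionserver.global.memstore.size = 0.4 of heap:
    // working backwards gives the approximate region-server heap for this JVM.
    double impliedHeapMb = globalLimitMb / 0.4;
    System.out.printf("implied RS heap ~= %.0f MB%n", impliedHeapMb); // ~1956 MB
  }
}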
2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,665 DEBUG [RS:3;jenkins-hbase4:45449] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-18 07:15:36,668 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:36,668 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:36,668 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-18 07:15:36,689 INFO [RS:3;jenkins-hbase4:45449] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-18 07:15:36,689 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45449,1689664536531-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
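The executor.ExecutorService and ChoreService entries above follow a simple pattern: one small bounded thread pool per region-server operation type (RS_OPEN_REGION, RS_CLOSE_REGION, and so on, each with corePoolSize=1, maxPoolSize=1) plus periodic chores such as CompactionChecker at a 1000 ms period. The sketch below is only a plain java.util.concurrent analogue of that pattern, not HBase's own ExecutorService/ChoreService classes; the pool sizes and the period are taken from the log, everything else is illustrative.

import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorAndChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // Analogue of RS_OPEN_REGION-regionserver/...: corePoolSize=1, maxPoolSize=1,
    // i.e. a single dedicated worker per operation type with a queue in front of it.
    ThreadPoolExecutor openRegionPool =
        new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    openRegionPool.submit(() -> System.out.println("open-region work would run here"));

    // Analogue of ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS.
    ScheduledExecutorService chores = Executors.newSingleThreadScheduledExecutor();
    chores.scheduleAtFixedRate(
        () -> System.out.println("compaction check tick"), 1000, 1000, TimeUnit.MILLISECONDS);

    Thread.sleep(3500);       // let a few chore ticks fire
    chores.shutdownNow();
    openRegionPool.shutdown();
  }
}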
2023-07-18 07:15:36,706 INFO [RS:3;jenkins-hbase4:45449] regionserver.Replication(203): jenkins-hbase4.apache.org,45449,1689664536531 started 2023-07-18 07:15:36,707 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45449,1689664536531, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45449, sessionid=0x10177493c9e000b 2023-07-18 07:15:36,707 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-18 07:15:36,707 DEBUG [RS:3;jenkins-hbase4:45449] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,707 DEBUG [RS:3;jenkins-hbase4:45449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45449,1689664536531' 2023-07-18 07:15:36,707 DEBUG [RS:3;jenkins-hbase4:45449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-18 07:15:36,707 DEBUG [RS:3;jenkins-hbase4:45449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-18 07:15:36,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:36,708 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-18 07:15:36,708 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-18 07:15:36,709 DEBUG [RS:3;jenkins-hbase4:45449] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:36,709 DEBUG [RS:3;jenkins-hbase4:45449] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45449,1689664536531' 2023-07-18 07:15:36,709 DEBUG [RS:3;jenkins-hbase4:45449] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-18 07:15:36,709 DEBUG [RS:3;jenkins-hbase4:45449] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-18 07:15:36,710 DEBUG [RS:3;jenkins-hbase4:45449] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-18 07:15:36,710 INFO [RS:3;jenkins-hbase4:45449] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-18 07:15:36,710 INFO [RS:3;jenkins-hbase4:45449] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-18 07:15:36,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-18 07:15:36,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-18 07:15:36,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-18 07:15:36,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-18 07:15:36,719 DEBUG [hconnection-0x298f9a8c-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false
2023-07-18 07:15:36,723 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45850, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService
2023-07-18 07:15:36,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-18 07:15:36,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-18 07:15:36,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master
2023-07-18 07:15:36,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-18 07:15:36,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35714 deadline: 1689665736731, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist.
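Note on the ConstraintException above: the test teardown (TestRSGroupsBase.tearDownAfterMethod, visible in the client-side trace that follows) asks the rsgroup admin to move the master's own address, jenkins-hbase4.apache.org:38555, into the freshly added "master" group. RSGroupAdminServer.moveServers only accepts addresses of online region servers, and the active master is not one, hence "Server ... is either offline or it does not exist". The sketch below mirrors that client call path; the RSGroupAdminClient and Address signatures are taken from the branch-2.4 hbase-rsgroup module named in the trace, and the snippet is an illustration, not the test's exact code.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterToRSGroup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // "add rsgroup master" in the log: create the target group first.
      rsGroupAdmin.addRSGroup("master");

      // Moving the master's address fails because it is not an online region server;
      // RSGroupAdminServer.moveServers rejects it with the ConstraintException logged above.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38555)),
          "master");
    }
  }
}

In the test run itself the failure is tolerated: the exception is only logged ("Got this on setup, FYI" in the WARN entry that follows) and the teardown continues with the cleanup wait.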
2023-07-18 07:15:36,732 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:36,733 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:36,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:36,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:36,734 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:36,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:36,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:36,785 INFO [Listener at localhost/36955] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=556 (was 514) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 561586544@qtp-1565159054-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44213 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x15086123-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: hconnection-0x298f9a8c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40483-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d-prefix:jenkins-hbase4.apache.org,43187,1689664535022 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:43393 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1722280313-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-74920ac7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1882869650-2588 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data1/current/BP-29044841-172.31.14.131-1689664534220 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x21bb2893-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2a78b461 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@78016663 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data6/current/BP-29044841-172.31.14.131-1689664534220 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp915121724-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x410836a2-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: qtp1882869650-2584 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1722280313-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1478065905-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43393 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 43393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x6596f8a2-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1722280313-2307 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x21bb2893 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp2077874689-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x48128c77-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_760308466_17 at /127.0.0.1:32868 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:39713 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 44453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36955-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x410836a2-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38555,1689664534957 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57544@0x35a21aab-SendThread(127.0.0.1:57544) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d-prefix:jenkins-hbase4.apache.org,43187,1689664535022.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially 
hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_760308466_17 at /127.0.0.1:59788 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43393 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:63390@0x21bb2893-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1478065905-2248-acceptor-0@458c56cd-ServerConnector@4e213774{HTTP/1.1, (http/1.1)}{0.0.0.0:37711} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:34145 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1882869650-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@35d728d2 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data5/current/BP-29044841-172.31.14.131-1689664534220 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:39713 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data4/current/BP-29044841-172.31.14.131-1689664534220 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x48128c77 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:39713 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2020849238-2322-acceptor-0@22673c20-ServerConnector@76b45728{HTTP/1.1, (http/1.1)}{0.0.0.0:33167} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6266862b[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2323 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:39713 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:36566 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x168f4c7a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x43c2f650-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x43c2f650 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664535321 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-6bdf4556-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1722280313-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 943306956@qtp-1565159054-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS:3;jenkins-hbase4:45449 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36955.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x410836a2-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@35a42664 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x15086123-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/36955-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 2 on default port 40827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 4 on default port 40827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1478065905-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 36955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1722280313-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43393 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1478065905-2247 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@555cde6[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3dc23edc java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x410836a2-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2321 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 44453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:39713 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: jenkins-hbase4:38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: qtp2077874689-2218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:45449Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Session-HouseKeeper-1f401c8d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2319 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 44453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1259607702@qtp-219313989-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43095 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:39713 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x410836a2-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData-prefix:jenkins-hbase4.apache.org,38555,1689664534957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x168f4c7a-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x33456cc8-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6b2c1c1d[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-14034eff-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:63390 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RS:2;jenkins-hbase4:34145-longCompactions-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x015f1d44 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x15086123 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x410836a2-shared-pool-3 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp915121724-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40827 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:59774 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57544@0x35a21aab-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1882869650-2587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2320 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x33456cc8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@47a371df sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data3/current/BP-29044841-172.31.14.131-1689664534220 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4252c65b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43393 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 36955 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_760308466_17 at /127.0.0.1:36564 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x410836a2-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1882869650-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x015f1d44-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ProcessThread(sid:0 cport:63390): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data2/current/BP-29044841-172.31.14.131-1689664534220 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 43393 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2077874689-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@32ed83cd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@1eaeb828 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1722280313-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:43187 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native 
Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2318 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x410836a2-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1478065905-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d-prefix:jenkins-hbase4.apache.org,34145,1689664535113 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1882869650-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 525490453@qtp-332638597-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36477 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:32862 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1071530391_17 at /127.0.0.1:36546 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1478065905-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:45449-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2077874689-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x48128c77-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(2024341212) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: 1327446378@qtp-219313989-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1019364747_17 at /127.0.0.1:32830 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp915121724-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 40827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 
VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/44381-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-7e02dcdf-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1882869650-2586 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2324 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d-prefix:jenkins-hbase4.apache.org,40483,1689664535069 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x015f1d44-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1071530391_17 at /127.0.0.1:32856 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp915121724-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:39713 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 36955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/36955.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7391f6f6 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1882869650-2585-acceptor-0@783acad3-ServerConnector@7a007c37{HTTP/1.1, (http/1.1)}{0.0.0.0:39005} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2077874689-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp915121724-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2020849238-2325 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: jenkins-hbase4:40483Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1019364747_17 at /127.0.0.1:59756 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp915121724-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 300634907@qtp-492901182-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: jenkins-hbase4:34145Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp915121724-2277 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1071530391_17 at /127.0.0.1:59770 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@6e825350 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2077874689-2216 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/691978700.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: 
BP-29044841-172.31.14.131-1689664534220:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2077874689-2217-acceptor-0@f10dabc-ServerConnector@7dfb63db{HTTP/1.1, (http/1.1)}{0.0.0.0:44151} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x168f4c7a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664535320 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:43393 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45091,1689664530065 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS:0;jenkins-hbase4:43187-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 44453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 44453 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1019364747_17 at /127.0.0.1:59712 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 44453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x43c2f650-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:40483 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1478065905-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:38555 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/36955-SendThread(127.0.0.1:63390) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/36955-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:43187Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1722280313-2308-acceptor-0@1975b3bf-ServerConnector@28b8c4bb{HTTP/1.1, (http/1.1)}{0.0.0.0:44157} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36955 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@372de80e java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43393 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43393 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 40827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=40483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x298f9a8c-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40827 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/44381-SendThread(127.0.0.1:57544) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1019364747_17 at /127.0.0.1:36524 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39713 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:59796 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1478065905-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1722280313-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43393 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/36955.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43187 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:43393 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@2e7cdee java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1591560354) connection to localhost/127.0.0.1:39713 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp915121724-2278-acceptor-0@feb7728-ServerConnector@3e29d14d{HTTP/1.1, (http/1.1)}{0.0.0.0:42051} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34145 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 531523699@qtp-332638597-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:43393 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:32880 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45449 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57544@0x35a21aab sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1259606006.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-29044841-172.31.14.131-1689664534220:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:32788 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2063969769_17 at /127.0.0.1:36554 [Receiving block BP-29044841-172.31.14.131-1689664534220:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38555 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63390@0x33456cc8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 1584733569@qtp-492901182-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40561 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp2077874689-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=837 (was 806) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=398 (was 416), ProcessCount=172 (was 174), AvailableMemoryMB=4474 (was 2410) - AvailableMemoryMB LEAK? - 2023-07-18 07:15:36,788 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=556 is superior to 500 2023-07-18 07:15:36,807 INFO [Listener at localhost/36955] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=556, OpenFileDescriptor=837, MaxFileDescriptor=60000, SystemLoadAverage=398, ProcessCount=172, AvailableMemoryMB=4473 2023-07-18 07:15:36,807 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=556 is superior to 500 2023-07-18 07:15:36,807 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-18 07:15:36,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:36,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:36,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:36,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:36,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:36,812 INFO [RS:3;jenkins-hbase4:45449] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45449%2C1689664536531, suffix=, logDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,45449,1689664536531, archiveDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs, maxLogs=32 2023-07-18 07:15:36,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:36,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:36,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:36,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:36,817 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:36,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:36,822 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:36,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:36,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:36,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:36,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:36,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:36,837 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK] 2023-07-18 07:15:36,843 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK] 2023-07-18 07:15:36,843 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK] 2023-07-18 07:15:36,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:36,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:36,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:36,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:36,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35714 deadline: 1689665736850, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:36,851 INFO [RS:3;jenkins-hbase4:45449] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/WALs/jenkins-hbase4.apache.org,45449,1689664536531/jenkins-hbase4.apache.org%2C45449%2C1689664536531.1689664536813 2023-07-18 07:15:36,851 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:36,852 DEBUG [RS:3;jenkins-hbase4:45449] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38199,DS-41b8dfa2-90aa-4f84-b2a4-952d5216a8d5,DISK], DatanodeInfoWithStorage[127.0.0.1:44193,DS-1be37284-7d24-4a11-875d-bd50b01a56c1,DISK], DatanodeInfoWithStorage[127.0.0.1:45913,DS-971e1925-2fd4-4c39-be24-7f9fcf0a5b47,DISK]] 2023-07-18 07:15:36,852 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:36,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:36,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:36,854 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:36,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:36,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:36,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:36,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 07:15:36,858 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:36,858 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-18 07:15:36,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 07:15:36,860 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:36,860 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:36,861 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:36,865 INFO [PEWorker-2] 
procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-18 07:15:36,867 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:36,867 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f empty. 2023-07-18 07:15:36,868 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:36,868 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 07:15:36,908 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-18 07:15:36,909 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => de20109967fe0d19c8181d7f6384fc5f, NAME => 't1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp 2023-07-18 07:15:36,939 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:36,939 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing de20109967fe0d19c8181d7f6384fc5f, disabling compactions & flushes 2023-07-18 07:15:36,939 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:36,939 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:36,939 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. after waiting 0 ms 2023-07-18 07:15:36,939 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:36,939 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 
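The create request logged above ('t1' with a single family 'cf1', one version, no bloom filter, 64 KB blocks) corresponds to an ordinary HBase 2.x Admin call. A minimal client-side sketch for comparison; only the table and family names and the attribute values are taken from the log, the class name and connection setup are assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class CreateT1Sketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Same shape as the logged descriptor: one family 'cf1', one version,
          // no bloom filter, 64 KB blocks, region replication 1.
          TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
                  .setMaxVersions(1)
                  .setBloomFilterType(BloomType.NONE)
                  .setBlocksize(65536)
                  .build())
              .build();
          admin.createTable(t1); // the master turns this into the CreateTableProcedure (pid=12 above)
        }
      }
    }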
2023-07-18 07:15:36,939 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for de20109967fe0d19c8181d7f6384fc5f: 2023-07-18 07:15:36,942 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-18 07:15:36,943 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664536943"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664536943"}]},"ts":"1689664536943"} 2023-07-18 07:15:36,944 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-18 07:15:36,945 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-18 07:15:36,945 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664536945"}]},"ts":"1689664536945"} 2023-07-18 07:15:36,946 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-18 07:15:36,949 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-18 07:15:36,950 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-18 07:15:36,950 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-18 07:15:36,950 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-18 07:15:36,950 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-18 07:15:36,950 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-18 07:15:36,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, ASSIGN}] 2023-07-18 07:15:36,951 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, ASSIGN 2023-07-18 07:15:36,952 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34145,1689664535113; forceNewPlan=false, retain=false 2023-07-18 07:15:36,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 07:15:37,102 INFO [jenkins-hbase4:38555] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
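The ASSIGN subprocedure above places the single region of 't1' on jenkins-hbase4.apache.org,34145,...; from a client the resulting location can be read back through a RegionLocator. A small sketch, assuming an open Connection (for example the one from the previous snippet); the class and method names are illustrative:

    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class T1LocationSketch {
      // Reads the current location of t1's single region from meta
      // (reload = true skips the client-side location cache).
      static void printT1Location(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
          HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes(""), true);
          System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
        }
      }
    }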
2023-07-18 07:15:37,103 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=de20109967fe0d19c8181d7f6384fc5f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:37,104 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664537103"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664537103"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664537103"}]},"ts":"1689664537103"} 2023-07-18 07:15:37,105 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure de20109967fe0d19c8181d7f6384fc5f, server=jenkins-hbase4.apache.org,34145,1689664535113}] 2023-07-18 07:15:37,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 07:15:37,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:37,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de20109967fe0d19c8181d7f6384fc5f, NAME => 't1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.', STARTKEY => '', ENDKEY => ''} 2023-07-18 07:15:37,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-18 07:15:37,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,262 INFO [StoreOpener-de20109967fe0d19c8181d7f6384fc5f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,263 DEBUG [StoreOpener-de20109967fe0d19c8181d7f6384fc5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/cf1 2023-07-18 07:15:37,263 DEBUG [StoreOpener-de20109967fe0d19c8181d7f6384fc5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/cf1 2023-07-18 07:15:37,264 INFO [StoreOpener-de20109967fe0d19c8181d7f6384fc5f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de20109967fe0d19c8181d7f6384fc5f columnFamilyName cf1 2023-07-18 07:15:37,264 INFO [StoreOpener-de20109967fe0d19c8181d7f6384fc5f-1] regionserver.HStore(310): Store=de20109967fe0d19c8181d7f6384fc5f/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-18 07:15:37,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/default/t1/de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/default/t1/de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-18 07:15:37,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de20109967fe0d19c8181d7f6384fc5f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10334151360, jitterRate=-0.03755715489387512}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-18 07:15:37,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de20109967fe0d19c8181d7f6384fc5f: 2023-07-18 07:15:37,271 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f., pid=14, masterSystemTime=1689664537256 2023-07-18 07:15:37,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:37,273 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 
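The CompactionConfiguration record above reports the effective settings for cf1 (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2), inherited here from cluster defaults. If those values needed to be pinned per family instead, the standard compaction keys can be set on the family descriptor; a sketch under that assumption (the helper name is made up for illustration):

    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public final class Cf1CompactionTuningSketch {
      // Pins the values reported above explicitly on the family descriptor
      // rather than relying on cluster-wide defaults.
      static ColumnFamilyDescriptor cf1WithCompactionTuning() {
        return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
            .setConfiguration("hbase.hstore.compaction.min", "3")   // minFilesToCompact
            .setConfiguration("hbase.hstore.compaction.max", "10")  // maxFilesToCompact
            .setConfiguration("hbase.hstore.compaction.ratio", "1.2")
            .build();
      }
    }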
2023-07-18 07:15:37,273 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=de20109967fe0d19c8181d7f6384fc5f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:37,273 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664537273"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689664537273"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689664537273"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689664537273"}]},"ts":"1689664537273"} 2023-07-18 07:15:37,276 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-18 07:15:37,276 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure de20109967fe0d19c8181d7f6384fc5f, server=jenkins-hbase4.apache.org,34145,1689664535113 in 169 msec 2023-07-18 07:15:37,277 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-18 07:15:37,277 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, ASSIGN in 326 msec 2023-07-18 07:15:37,278 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-18 07:15:37,278 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664537278"}]},"ts":"1689664537278"} 2023-07-18 07:15:37,279 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-18 07:15:37,281 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-18 07:15:37,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 425 msec 2023-07-18 07:15:37,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-18 07:15:37,463 INFO [Listener at localhost/36955] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-18 07:15:37,463 DEBUG [Listener at localhost/36955] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-18 07:15:37,463 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:37,466 INFO [Listener at localhost/36955] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-18 07:15:37,466 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:37,466 INFO [Listener at localhost/36955] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
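The CreateTableProcedure (procId 12), region assignment, and "All regions for table t1 assigned" entries above correspond to an ordinary client-side table creation. The following is a minimal sketch of those calls, not the test's verbatim code; the class name and the assumption that the mini cluster's configuration is on the classpath are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    // Assumes an hbase-site.xml pointing at the (mini) cluster is on the classpath.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName t1 = TableName.valueOf("t1");
      // One column family 'cf1' with default attributes, matching the descriptor logged for procId 12.
      admin.createTable(TableDescriptorBuilder.newBuilder(t1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build());
      // createTable blocks until the CreateTableProcedure completes; the "Waiting until all regions
      // of table t1 get assigned" entries come from the test additionally waiting on assignment
      // via HBaseTestingUtility#waitUntilAllRegionsAssigned.
    }
  }
}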
2023-07-18 07:15:37,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-18 07:15:37,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-18 07:15:37,470 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-18 07:15:37,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-18 07:15:37,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:35714 deadline: 1689664597467, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-18 07:15:37,473 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:37,474 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-18 07:15:37,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:37,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:37,575 INFO [Listener at localhost/36955] client.HBaseAdmin$15(890): Started disable of t1 2023-07-18 07:15:37,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-18 07:15:37,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-18 07:15:37,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 07:15:37,579 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664537578"}]},"ts":"1689664537578"} 2023-07-18 07:15:37,580 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-18 07:15:37,582 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-18 07:15:37,583 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, UNASSIGN}] 2023-07-18 07:15:37,584 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, UNASSIGN 2023-07-18 07:15:37,584 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=de20109967fe0d19c8181d7f6384fc5f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:37,584 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664537584"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689664537584"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689664537584"}]},"ts":"1689664537584"} 2023-07-18 07:15:37,585 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure de20109967fe0d19c8181d7f6384fc5f, server=jenkins-hbase4.apache.org,34145,1689664535113}] 2023-07-18 07:15:37,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 07:15:37,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de20109967fe0d19c8181d7f6384fc5f, disabling compactions & flushes 2023-07-18 07:15:37,738 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:37,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:37,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. after waiting 0 ms 2023-07-18 07:15:37,738 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 
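The "Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1" and rolled-back pid=15 entries a few lines back record the master rejecting a second create of the same table name in CreateTableProcedure.prepareCreate. On the client side that surfaces as a TableExistsException from Admin#createTable; a hedged sketch (helper and class names are illustrative, not from the test):

import java.io.IOException;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class CreateExistingTableSketch {
  // Returns true if the table was created, false if it already existed.
  static boolean createIfAbsent(Admin admin, TableName name) throws IOException {
    try {
      admin.createTable(TableDescriptorBuilder.newBuilder(name)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build());
      return true;
    } catch (TableExistsException e) {
      // The master's CreateTableProcedure rolled back because 'name' already exists,
      // as with the second create of 't1' (pid=15) logged above.
      return false;
    }
  }
}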
2023-07-18 07:15:37,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-18 07:15:37,743 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f. 2023-07-18 07:15:37,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de20109967fe0d19c8181d7f6384fc5f: 2023-07-18 07:15:37,745 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,746 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=de20109967fe0d19c8181d7f6384fc5f, regionState=CLOSED 2023-07-18 07:15:37,746 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689664537745"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689664537745"}]},"ts":"1689664537745"} 2023-07-18 07:15:37,760 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-18 07:15:37,760 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure de20109967fe0d19c8181d7f6384fc5f, server=jenkins-hbase4.apache.org,34145,1689664535113 in 163 msec 2023-07-18 07:15:37,762 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-18 07:15:37,762 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=de20109967fe0d19c8181d7f6384fc5f, UNASSIGN in 177 msec 2023-07-18 07:15:37,763 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689664537763"}]},"ts":"1689664537763"} 2023-07-18 07:15:37,764 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-18 07:15:37,766 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-18 07:15:37,768 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 192 msec 2023-07-18 07:15:37,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-18 07:15:37,880 INFO [Listener at localhost/36955] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-18 07:15:37,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-18 07:15:37,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-18 07:15:37,884 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 07:15:37,884 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-18 07:15:37,885 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-18 07:15:37,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:37,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:37,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:37,888 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 07:15:37,890 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/cf1, FileablePath, hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/recovered.edits] 2023-07-18 07:15:37,895 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/recovered.edits/4.seqid to hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/archive/data/default/t1/de20109967fe0d19c8181d7f6384fc5f/recovered.edits/4.seqid 2023-07-18 07:15:37,895 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/.tmp/data/default/t1/de20109967fe0d19c8181d7f6384fc5f 2023-07-18 07:15:37,895 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-18 07:15:37,897 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-18 07:15:37,899 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-18 07:15:37,900 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-18 07:15:37,901 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-18 07:15:37,901 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-18 07:15:37,901 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689664537901"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:37,903 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-18 07:15:37,903 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => de20109967fe0d19c8181d7f6384fc5f, NAME => 't1,,1689664536856.de20109967fe0d19c8181d7f6384fc5f.', STARTKEY => '', ENDKEY => ''}] 2023-07-18 07:15:37,903 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-18 07:15:37,903 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689664537903"}]},"ts":"9223372036854775807"} 2023-07-18 07:15:37,904 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-18 07:15:37,906 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-18 07:15:37,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-18 07:15:37,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-18 07:15:37,990 INFO [Listener at localhost/36955] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-18 07:15:37,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:37,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:37,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:37,995 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
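The DISABLE (procId 16) and DELETE (procId 19) operations above, including the HFileArchiver moving the region directory to the archive, are driven by two Admin calls. A minimal sketch of that client-side sequence, assuming an open Admin handle (class and method names here are illustrative):

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropTableSketch {
  static void drop(Admin admin, String table) throws IOException {
    TableName name = TableName.valueOf(table);
    if (admin.tableExists(name)) {
      if (admin.isTableEnabled(name)) {
        admin.disableTable(name);   // DisableTableProcedure: regions unassigned, state=DISABLED in hbase:meta
      }
      admin.deleteTable(name);      // DeleteTableProcedure: region dirs archived, rows removed from hbase:meta
    }
  }
}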
2023-07-18 07:15:37,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:37,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:37,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:37,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,009 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35714 deadline: 1689665738018, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,019 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:38,022 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,023 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,023 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,044 INFO [Listener at localhost/36955] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=568 (was 556) - Thread LEAK? 
-, OpenFileDescriptor=830 (was 837), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 398), ProcessCount=172 (was 172), AvailableMemoryMB=4461 (was 4473) 2023-07-18 07:15:38,044 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-18 07:15:38,062 INFO [Listener at localhost/36955] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=568, OpenFileDescriptor=830, MaxFileDescriptor=60000, SystemLoadAverage=374, ProcessCount=172, AvailableMemoryMB=4460 2023-07-18 07:15:38,062 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=568 is superior to 500 2023-07-18 07:15:38,062 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-18 07:15:38,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:38,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,076 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,078 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35714 deadline: 1689665738084, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,085 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:38,087 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,088 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 07:15:38,090 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:38,091 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-18 07:15:38,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-18 07:15:38,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-18 07:15:38,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
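The repeated ConstraintException stack traces above ("Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist.") come from the test's setup/teardown re-creating an rsgroup named "master" and attempting to move the master's RPC address into it; since that address is not a live region server, RSGroupAdminServer.moveServers rejects the call and the test logs it as a non-fatal warning. A hedged sketch of that sequence using the hbase-rsgroup client on branch-2 (the constructor usage and helper names are assumptions based on the stack trace, not the test's verbatim code):

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class RSGroupTeardownSketch {
  static void restoreMasterGroup(Connection conn, String masterHost, int masterRpcPort) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.removeRSGroup("master");   // RemoveRSGroup in the log (the group is empty at this point)
    rsGroupAdmin.addRSGroup("master");      // AddRSGroup in the log
    try {
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts(masterHost, masterRpcPort)), "master");
    } catch (ConstraintException expected) {
      // The master's port (38555 here) is not a region server, so the move is rejected --
      // the same "either offline or it does not exist" failure logged repeatedly above.
    }
  }
}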
2023-07-18 07:15:38,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,115 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35714 deadline: 1689665738127, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,128 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:38,129 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,130 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,148 INFO [Listener at localhost/36955] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569 (was 568) - Thread LEAK? -, OpenFileDescriptor=829 (was 830), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 374), ProcessCount=172 (was 172), AvailableMemoryMB=4461 (was 4460) - AvailableMemoryMB LEAK? 
- 2023-07-18 07:15:38,148 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-18 07:15:38,168 INFO [Listener at localhost/36955] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=569, OpenFileDescriptor=829, MaxFileDescriptor=60000, SystemLoadAverage=374, ProcessCount=172, AvailableMemoryMB=4460 2023-07-18 07:15:38,168 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-18 07:15:38,168 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-18 07:15:38,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:38,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,181 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,185 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35714 deadline: 1689665738191, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,192 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:38,193 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,194 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,194 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-18 07:15:38,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,211 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35714 deadline: 1689665738220, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,221 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:38,222 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,224 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,245 INFO [Listener at localhost/36955] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=570 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=829 (was 829), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 374), ProcessCount=172 (was 172), AvailableMemoryMB=4457 (was 4460) 2023-07-18 07:15:38,245 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-18 07:15:38,262 INFO [Listener at localhost/36955] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=570, OpenFileDescriptor=828, MaxFileDescriptor=60000, SystemLoadAverage=374, ProcessCount=172, AvailableMemoryMB=4457 2023-07-18 07:15:38,263 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-18 07:15:38,263 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-18 07:15:38,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,266 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-18 07:15:38,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,276 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,278 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35714 deadline: 1689665738284, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,285 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-18 07:15:38,286 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,287 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,288 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-18 07:15:38,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-18 07:15:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 07:15:38,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-18 07:15:38,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 07:15:38,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,301 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 07:15:38,304 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:38,307 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-18 07:15:38,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-18 07:15:38,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 07:15:38,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:35714 deadline: 1689665738402, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-18 07:15:38,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-18 07:15:38,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 07:15:38,425 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 07:15:38,426 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 16 msec 2023-07-18 07:15:38,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-18 07:15:38,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-18 07:15:38,526 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 07:15:38,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-18 07:15:38,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-18 07:15:38,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-18 07:15:38,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,539 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,541 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 07:15:38,542 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,543 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-18 07:15:38,543 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-18 07:15:38,544 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,545 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-18 07:15:38,546 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-18 07:15:38,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-18 07:15:38,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-18 07:15:38,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-18 07:15:38,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-18 07:15:38,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:35714 deadline: 1689664598652, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-18 07:15:38,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
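The records above walk through testNamespaceConstraint: a namespace created with the hbase.rsgroup.name property ties itself to a region server group, so removeRSGroup is rejected ("RSGroup Group_foo is referenced by namespace: Group_foo") until the namespace is modified or dropped, and the RSGroupAdminEndpoint preCreateNamespace hook rejects a namespace that names a group which does not exist ("Region server group foo does not exist."). Below is a condensed client-side sketch of that sequence, assuming a running cluster reachable through the local configuration and the RSGroupAdminClient shipped in the hbase-rsgroup module (the same client these tests call); the ns_bad name is made up for illustration and error handling is reduced to comments.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class NamespaceRSGroupConstraintSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // A group that is referenced by a namespace cannot be removed.
      rsGroupAdmin.addRSGroup("Group_foo");
      admin.createNamespace(NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo").build());
      try {
        rsGroupAdmin.removeRSGroup("Group_foo");
      } catch (ConstraintException e) {
        // "RSGroup Group_foo is referenced by namespace: Group_foo"
      }

      // Once the namespace no longer references the group, removal succeeds.
      admin.deleteNamespace("Group_foo");
      rsGroupAdmin.removeRSGroup("Group_foo");

      // A namespace may not point at a group that does not exist;
      // preCreateNamespace rejects the request before the procedure runs.
      try {
        admin.createNamespace(NamespaceDescriptor.create("ns_bad")  // hypothetical name
            .addConfiguration("hbase.rsgroup.name", "foo").build());
      } catch (IOException e) {
        // Server reports: "Region server group foo does not exist."
      }
    }
  }
}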
2023-07-18 07:15:38,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-18 07:15:38,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-18 07:15:38,663 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-18 07:15:38,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
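Both the stack trace at the start of this section and the one a few records further down come from the same teardown step: the test helper re-creates a "master" rsgroup and tries to move the master's own address (jenkins-hbase4.apache.org:38555) into it, but the master is not an online region server, so RSGroupAdminServer.moveServers rejects the call with a ConstraintException and the test merely logs it as "Got this on setup, FYI". A minimal sketch of that call, assuming a live cluster and taking the host and port from this log:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      rsGroupAdmin.addRSGroup("master");
      try {
        // This is the HMaster's host:port, not a live region server, so the move is refused.
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 38555)),
            "master");
      } catch (ConstraintException e) {
        // "Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist."
      }
    }
  }
}

Only addresses of region servers currently registered with the group membership (or listed in the default group) can be moved; anything else trips the same constraint check seen repeatedly in this log.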
2023-07-18 07:15:38,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-18 07:15:38,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-18 07:15:38,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-18 07:15:38,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-18 07:15:38,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-18 07:15:38,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-18 07:15:38,671 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-18 07:15:38,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-18 07:15:38,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-18 07:15:38,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-18 07:15:38,675 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-18 07:15:38,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-18 07:15:38,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38555] to rsgroup master 2023-07-18 07:15:38,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-18 07:15:38,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35714 deadline: 1689665738680, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 2023-07-18 07:15:38,681 WARN [Listener at localhost/36955] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor60.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38555 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-18 07:15:38,683 INFO [Listener at localhost/36955] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-18 07:15:38,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-18 07:15:38,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-18 07:15:38,684 INFO [Listener at localhost/36955] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34145, jenkins-hbase4.apache.org:40483, jenkins-hbase4.apache.org:43187, jenkins-hbase4.apache.org:45449], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-18 07:15:38,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-18 07:15:38,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38555] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-18 07:15:38,704 INFO [Listener at localhost/36955] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=567 (was 570), OpenFileDescriptor=826 (was 828), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=374 (was 374), ProcessCount=172 (was 172), AvailableMemoryMB=4455 (was 4457) 2023-07-18 07:15:38,704 WARN [Listener at localhost/36955] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-18 07:15:38,704 INFO [Listener at localhost/36955] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-18 07:15:38,704 INFO [Listener at localhost/36955] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-18 07:15:38,704 DEBUG [Listener at localhost/36955] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x168f4c7a to 127.0.0.1:63390 2023-07-18 07:15:38,704 DEBUG [Listener at localhost/36955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,704 DEBUG [Listener at localhost/36955] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-18 
07:15:38,704 DEBUG [Listener at localhost/36955] util.JVMClusterUtil(257): Found active master hash=2137150986, stopped=false 2023-07-18 07:15:38,705 DEBUG [Listener at localhost/36955] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-18 07:15:38,705 DEBUG [Listener at localhost/36955] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-18 07:15:38,705 INFO [Listener at localhost/36955] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:38,708 INFO [Listener at localhost/36955] procedure2.ProcedureExecutor(629): Stopping 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-18 07:15:38,708 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:38,708 DEBUG [Listener at localhost/36955] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x43c2f650 to 127.0.0.1:63390 2023-07-18 07:15:38,709 DEBUG [Listener at localhost/36955] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:38,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:38,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2297): ***** STOPPING 
region server 'jenkins-hbase4.apache.org,43187,1689664535022' ***** 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:38,709 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40483,1689664535069' ***** 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34145,1689664535113' ***** 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:38,709 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:38,709 INFO [Listener at localhost/36955] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45449,1689664536531' ***** 2023-07-18 07:15:38,709 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:38,709 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:38,711 INFO [Listener at localhost/36955] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-18 07:15:38,713 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:38,715 INFO [RS:0;jenkins-hbase4:43187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@59d67bc5{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:38,716 INFO [RS:1;jenkins-hbase4:40483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6cbf8af{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:38,716 INFO [RS:2;jenkins-hbase4:34145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2a31ff28{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:38,717 INFO [RS:3;jenkins-hbase4:45449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@396047b6{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-18 07:15:38,717 INFO [RS:0;jenkins-hbase4:43187] server.AbstractConnector(383): Stopped ServerConnector@4e213774{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:38,717 INFO [RS:1;jenkins-hbase4:40483] server.AbstractConnector(383): Stopped ServerConnector@3e29d14d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:38,717 INFO [RS:0;jenkins-hbase4:43187] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:38,717 INFO [RS:3;jenkins-hbase4:45449] server.AbstractConnector(383): Stopped ServerConnector@7a007c37{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:38,717 INFO [RS:1;jenkins-hbase4:40483] session.HouseKeeper(149): 
node0 Stopped scavenging 2023-07-18 07:15:38,717 INFO [RS:2;jenkins-hbase4:34145] server.AbstractConnector(383): Stopped ServerConnector@28b8c4bb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:38,718 INFO [RS:0;jenkins-hbase4:43187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d1f03cf{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:38,718 INFO [RS:3;jenkins-hbase4:45449] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:38,719 INFO [RS:2;jenkins-hbase4:34145] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:38,719 INFO [RS:1;jenkins-hbase4:40483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e4c685d{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:38,721 INFO [RS:2;jenkins-hbase4:34145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c8bb404{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:38,720 INFO [RS:3;jenkins-hbase4:45449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1bb8b1db{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:38,723 INFO [RS:2;jenkins-hbase4:34145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3d7ca1bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:38,724 INFO [RS:3;jenkins-hbase4:45449] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b0a60fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:38,720 INFO [RS:0;jenkins-hbase4:43187] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e653533{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:38,722 INFO [RS:1;jenkins-hbase4:40483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@115f03f4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:38,724 INFO [RS:2;jenkins-hbase4:34145] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:38,724 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:38,725 INFO [RS:2;jenkins-hbase4:34145] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:38,725 INFO [RS:2;jenkins-hbase4:34145] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
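From "Shutting down minicluster" onward the section is ordinary teardown: the master requests cluster shutdown, the /hbase/running znode is removed, and each region server stops its Jetty info server before closing its regions. For orientation, a minimal JUnit 4 lifecycle sketch (assuming HBaseTestingUtility, as this suite uses) that produces the same start-up and shutdown records:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Spins up ZooKeeper, a mini DFS and HBase with three region servers.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Produces the "Shutting down minicluster" / "***** STOPPING region server" records:
    // region servers close their regions, WALs are archived, ZK state is torn down.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void smoke() throws Exception {
    TEST_UTIL.getAdmin().listTableNames();
  }
}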
2023-07-18 07:15:38,725 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(3305): Received CLOSE for a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:38,725 INFO [RS:0;jenkins-hbase4:43187] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:38,725 INFO [RS:1;jenkins-hbase4:40483] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9ebb423e48347789f3922db24f9672a, disabling compactions & flushes 2023-07-18 07:15:38,725 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:38,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:38,725 INFO [RS:3;jenkins-hbase4:45449] regionserver.HeapMemoryManager(220): Stopping 2023-07-18 07:15:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:38,725 INFO [RS:3;jenkins-hbase4:45449] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:38,725 INFO [RS:1;jenkins-hbase4:40483] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:38,725 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:38,725 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-18 07:15:38,725 INFO [RS:0;jenkins-hbase4:43187] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-18 07:15:38,725 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:38,725 INFO [RS:0;jenkins-hbase4:43187] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:38,725 INFO [RS:1;jenkins-hbase4:40483] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:38,725 INFO [RS:3;jenkins-hbase4:45449] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-18 07:15:38,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 
after waiting 0 ms 2023-07-18 07:15:38,726 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:38,726 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:38,726 DEBUG [RS:3;jenkins-hbase4:45449] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x15086123 to 127.0.0.1:63390 2023-07-18 07:15:38,726 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(3305): Received CLOSE for a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:38,726 DEBUG [RS:2;jenkins-hbase4:34145] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x33456cc8 to 127.0.0.1:63390 2023-07-18 07:15:38,726 DEBUG [RS:3;jenkins-hbase4:45449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,726 DEBUG [RS:1;jenkins-hbase4:40483] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x21bb2893 to 127.0.0.1:63390 2023-07-18 07:15:38,726 DEBUG [RS:1;jenkins-hbase4:40483] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,726 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40483,1689664535069; all regions closed. 2023-07-18 07:15:38,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:38,726 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45449,1689664536531; all regions closed. 2023-07-18 07:15:38,726 DEBUG [RS:2;jenkins-hbase4:34145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a9ebb423e48347789f3922db24f9672a 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-18 07:15:38,727 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-18 07:15:38,727 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1478): Online Regions={a9ebb423e48347789f3922db24f9672a=hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a.} 2023-07-18 07:15:38,727 DEBUG [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1504): Waiting on a9ebb423e48347789f3922db24f9672a 2023-07-18 07:15:38,727 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:38,727 DEBUG [RS:0;jenkins-hbase4:43187] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x48128c77 to 127.0.0.1:63390 2023-07-18 07:15:38,727 DEBUG [RS:0;jenkins-hbase4:43187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,727 INFO [RS:0;jenkins-hbase4:43187] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:38,728 INFO [RS:0;jenkins-hbase4:43187] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:38,728 INFO [RS:0;jenkins-hbase4:43187] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
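Closing a region first disables compactions and flushes, acquires the close lock, and then flushes whatever remains in the memstore (267 B in one column family for hbase:namespace above, 6.43 KB for hbase:rsgroup, three column families for hbase:meta). That flush happens inside the close path itself; the externally visible equivalent, sketched here under the assumption of a reachable cluster, is an explicit flush through Admin before shutdown:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushBeforeShutdownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Writes pending memstore edits out as HFiles, the same end state the
      // region close sequence above reaches implicitly.
      admin.flush(TableName.valueOf("hbase:namespace"));
      admin.flush(TableName.valueOf("hbase:rsgroup"));
      admin.flush(TableName.META_TABLE_NAME);
    }
  }
}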
2023-07-18 07:15:38,728 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-18 07:15:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a9b68dc82f735620d2836da347a5b8c3, disabling compactions & flushes 2023-07-18 07:15:38,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. after waiting 0 ms 2023-07-18 07:15:38,728 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:38,728 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a9b68dc82f735620d2836da347a5b8c3 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-18 07:15:38,731 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-18 07:15:38,731 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, a9b68dc82f735620d2836da347a5b8c3=hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3.} 2023-07-18 07:15:38,731 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1504): Waiting on 1588230740, a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:38,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-18 07:15:38,731 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-18 07:15:38,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-18 07:15:38,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-18 07:15:38,731 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-18 07:15:38,731 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-18 07:15:38,745 DEBUG [RS:1;jenkins-hbase4:40483] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs 2023-07-18 07:15:38,745 INFO [RS:1;jenkins-hbase4:40483] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40483%2C1689664535069:(num 1689664535600) 2023-07-18 07:15:38,745 DEBUG [RS:1;jenkins-hbase4:40483] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,745 INFO [RS:1;jenkins-hbase4:40483] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,747 INFO [RS:1;jenkins-hbase4:40483] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore 
name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:38,747 INFO [RS:1;jenkins-hbase4:40483] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:38,747 INFO [RS:1;jenkins-hbase4:40483] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:38,747 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:38,747 INFO [RS:1;jenkins-hbase4:40483] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:38,748 INFO [RS:1;jenkins-hbase4:40483] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40483 2023-07-18 07:15:38,750 DEBUG [RS:3;jenkins-hbase4:45449] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs 2023-07-18 07:15:38,750 INFO [RS:3;jenkins-hbase4:45449] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45449%2C1689664536531:(num 1689664536813) 2023-07-18 07:15:38,750 DEBUG [RS:3;jenkins-hbase4:45449] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,750 INFO [RS:3;jenkins-hbase4:45449] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,754 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,755 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:38,755 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:38,756 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:38,755 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:38,756 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:38,756 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:38,756 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,756 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase/rs 2023-07-18 07:15:38,756 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40483,1689664535069 2023-07-18 07:15:38,756 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:38,756 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40483,1689664535069] 2023-07-18 07:15:38,756 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40483,1689664535069; numProcessing=1 2023-07-18 07:15:38,757 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40483,1689664535069 already deleted, retry=false 2023-07-18 07:15:38,758 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40483,1689664535069 expired; onlineServers=3 2023-07-18 07:15:38,758 INFO [RS:3;jenkins-hbase4:45449] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:38,759 INFO [RS:3;jenkins-hbase4:45449] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:38,759 INFO [RS:3;jenkins-hbase4:45449] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:38,759 INFO [RS:3;jenkins-hbase4:45449] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:38,759 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
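
The entries above show the hbase:rsgroup region flushing roughly 6.43 KB from its single 'm' column family before closing; that data is the group membership metadata the master's rsgroup endpoint persisted during the run. As a hedged illustration only (not the test's own code), the sketch below shows how such writes are typically driven through RSGroupAdminClient from the hbase-rsgroup module; the group name "app_group" and the server address are hypothetical.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // The client calls the rsgroup coprocessor endpoint on the master, which
      // persists every change in the 'm' family of hbase:rsgroup.
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);

      groups.addRSGroup("app_group"); // hypothetical group name
      groups.moveServers(
          Collections.singleton(Address.fromParts("rs-host", 16020)), // hypothetical server
          "app_group");

      RSGroupInfo info = groups.getRSGroupInfo("app_group");
      System.out.println("servers in app_group: " + info.getServers());
    }
  }
}
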
2023-07-18 07:15:38,763 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,768 INFO [RS:3;jenkins-hbase4:45449] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45449 2023-07-18 07:15:38,770 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:38,770 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:38,770 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:38,770 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45449,1689664536531 2023-07-18 07:15:38,770 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45449,1689664536531] 2023-07-18 07:15:38,770 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45449,1689664536531; numProcessing=2 2023-07-18 07:15:38,771 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:38,771 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-18 07:15:38,772 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45449,1689664536531 already deleted, retry=false 2023-07-18 07:15:38,772 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45449,1689664536531 expired; onlineServers=2 2023-07-18 07:15:38,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/.tmp/info/5a474794f2ac4c0685fe11d48b0c5a3c 2023-07-18 07:15:38,776 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,781 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5a474794f2ac4c0685fe11d48b0c5a3c 2023-07-18 07:15:38,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/.tmp/info/5a474794f2ac4c0685fe11d48b0c5a3c as hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/info/5a474794f2ac4c0685fe11d48b0c5a3c 2023-07-18 07:15:38,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5a474794f2ac4c0685fe11d48b0c5a3c 2023-07-18 07:15:38,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/info/5a474794f2ac4c0685fe11d48b0c5a3c, entries=3, sequenceid=9, filesize=5.0 K 2023-07-18 07:15:38,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for a9ebb423e48347789f3922db24f9672a in 69ms, sequenceid=9, compaction requested=false 2023-07-18 07:15:38,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/namespace/a9ebb423e48347789f3922db24f9672a/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-18 07:15:38,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:38,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9ebb423e48347789f3922db24f9672a: 2023-07-18 07:15:38,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689664535806.a9ebb423e48347789f3922db24f9672a. 2023-07-18 07:15:38,907 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:38,907 INFO [RS:3;jenkins-hbase4:45449] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45449,1689664536531; zookeeper connection closed. 2023-07-18 07:15:38,907 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:45449-0x10177493c9e000b, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:38,908 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@55ff97bb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@55ff97bb 2023-07-18 07:15:38,927 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34145,1689664535113; all regions closed. 
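
The ZKWatcher events above (NodeDeleted under /hbase/rs followed by NodeChildrenChanged, then RegionServerTracker "processing expiration") are the standard ZooKeeper ephemeral-node membership pattern: each region server registers an ephemeral znode, and closing its session deletes the node and fires the watches the master listens on. The sketch below is a generic, hedged illustration of that pattern using the plain ZooKeeper client, not HBase's own tracker code; the connection string and znode paths are made up.

import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralMembershipSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    // Connection string is made up; wait until the session is established.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30_000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
    });
    connected.await();

    // Parent znode playing the role of /hbase/rs in the log above.
    if (zk.exists("/members", false) == null) {
      zk.create("/members", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }

    // A "region server" announces itself with an ephemeral child; ZooKeeper
    // deletes it automatically when the owning session closes.
    String member = zk.create("/members/rs-host,16020", new byte[0],
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    System.out.println("registered " + member);

    // A "master" watches the parent; the watch fires with NodeChildrenChanged
    // when members join or leave, matching the events in the log above.
    List<String> children = zk.getChildren("/members",
        event -> System.out.println("membership changed: " + event.getType() + " " + event.getPath()));
    System.out.println("current members: " + children);

    // Closing the session removes the ephemeral node; in HBase the master holds
    // a separate session, so it is the master's watch that then fires.
    zk.close();
  }
}
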
2023-07-18 07:15:38,931 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1504): Waiting on 1588230740, a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:38,932 DEBUG [RS:2;jenkins-hbase4:34145] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs 2023-07-18 07:15:38,932 INFO [RS:2;jenkins-hbase4:34145] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34145%2C1689664535113:(num 1689664535599) 2023-07-18 07:15:38,932 DEBUG [RS:2;jenkins-hbase4:34145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:38,932 INFO [RS:2;jenkins-hbase4:34145] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:38,932 INFO [RS:2;jenkins-hbase4:34145] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:38,932 INFO [RS:2;jenkins-hbase4:34145] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-18 07:15:38,932 INFO [RS:2;jenkins-hbase4:34145] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-18 07:15:38,933 INFO [RS:2;jenkins-hbase4:34145] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-18 07:15:38,932 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:38,934 INFO [RS:2;jenkins-hbase4:34145] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34145 2023-07-18 07:15:38,937 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:38,937 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:38,937 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34145,1689664535113 2023-07-18 07:15:38,938 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34145,1689664535113] 2023-07-18 07:15:38,938 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34145,1689664535113; numProcessing=3 2023-07-18 07:15:38,939 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34145,1689664535113 already deleted, retry=false 2023-07-18 07:15:38,939 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34145,1689664535113 expired; onlineServers=1 2023-07-18 07:15:39,008 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,008 INFO [RS:1;jenkins-hbase4:40483] regionserver.HRegionServer(1227): 
Exiting; stopping=jenkins-hbase4.apache.org,40483,1689664535069; zookeeper connection closed. 2023-07-18 07:15:39,008 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:40483-0x10177493c9e0002, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,008 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@78784e96] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@78784e96 2023-07-18 07:15:39,131 DEBUG [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1504): Waiting on 1588230740, a9b68dc82f735620d2836da347a5b8c3 2023-07-18 07:15:39,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/.tmp/info/3c4e6630747a40deafdc47fc11cae07e 2023-07-18 07:15:39,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/.tmp/m/9ddc6019c2834cdfa1ab84461ff05d8b 2023-07-18 07:15:39,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9ddc6019c2834cdfa1ab84461ff05d8b 2023-07-18 07:15:39,197 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3c4e6630747a40deafdc47fc11cae07e 2023-07-18 07:15:39,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/.tmp/m/9ddc6019c2834cdfa1ab84461ff05d8b as hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/m/9ddc6019c2834cdfa1ab84461ff05d8b 2023-07-18 07:15:39,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9ddc6019c2834cdfa1ab84461ff05d8b 2023-07-18 07:15:39,202 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/m/9ddc6019c2834cdfa1ab84461ff05d8b, entries=12, sequenceid=29, filesize=5.4 K 2023-07-18 07:15:39,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for a9b68dc82f735620d2836da347a5b8c3 in 475ms, sequenceid=29, compaction requested=false 2023-07-18 07:15:39,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/rsgroup/a9b68dc82f735620d2836da347a5b8c3/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-18 07:15:39,211 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data 
size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/.tmp/rep_barrier/c66ca4f9a5784063b4b27417737784e4 2023-07-18 07:15:39,211 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:39,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:39,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a9b68dc82f735620d2836da347a5b8c3: 2023-07-18 07:15:39,212 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689664535950.a9b68dc82f735620d2836da347a5b8c3. 2023-07-18 07:15:39,216 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c66ca4f9a5784063b4b27417737784e4 2023-07-18 07:15:39,225 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/.tmp/table/6c1dcc85112a4ae1835a2c88d6a813ae 2023-07-18 07:15:39,230 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c1dcc85112a4ae1835a2c88d6a813ae 2023-07-18 07:15:39,231 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/.tmp/info/3c4e6630747a40deafdc47fc11cae07e as hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/info/3c4e6630747a40deafdc47fc11cae07e 2023-07-18 07:15:39,236 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3c4e6630747a40deafdc47fc11cae07e 2023-07-18 07:15:39,236 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/info/3c4e6630747a40deafdc47fc11cae07e, entries=22, sequenceid=26, filesize=7.3 K 2023-07-18 07:15:39,237 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/.tmp/rep_barrier/c66ca4f9a5784063b4b27417737784e4 as hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/rep_barrier/c66ca4f9a5784063b4b27417737784e4 2023-07-18 07:15:39,242 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c66ca4f9a5784063b4b27417737784e4 2023-07-18 07:15:39,242 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/rep_barrier/c66ca4f9a5784063b4b27417737784e4, entries=1, 
sequenceid=26, filesize=4.9 K 2023-07-18 07:15:39,243 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/.tmp/table/6c1dcc85112a4ae1835a2c88d6a813ae as hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/table/6c1dcc85112a4ae1835a2c88d6a813ae 2023-07-18 07:15:39,247 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6c1dcc85112a4ae1835a2c88d6a813ae 2023-07-18 07:15:39,248 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/table/6c1dcc85112a4ae1835a2c88d6a813ae, entries=6, sequenceid=26, filesize=5.1 K 2023-07-18 07:15:39,248 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 517ms, sequenceid=26, compaction requested=false 2023-07-18 07:15:39,256 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-18 07:15:39,256 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-18 07:15:39,257 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:39,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-18 07:15:39,257 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-18 07:15:39,331 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43187,1689664535022; all regions closed. 
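
Each region close above follows the same shape: updates are disabled, the memstore is flushed to a new HFile that is committed out of the region's .tmp directory, and a recovered.edits/N.seqid marker is written so the region can reopen without WAL replay. The same memstore flush can also be requested explicitly through the public Admin API, as in the hedged sketch below; the table name "my_table" is hypothetical and this is not part of the logged test.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExplicitFlushSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Asks every region of the (hypothetical) table to flush its memstore
      // into a new HFile, the same store-flush step the close path runs above.
      admin.flush(TableName.valueOf("my_table"));
    }
  }
}
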
2023-07-18 07:15:39,336 DEBUG [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs 2023-07-18 07:15:39,337 INFO [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43187%2C1689664535022.meta:.meta(num 1689664535748) 2023-07-18 07:15:39,342 DEBUG [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/oldWALs 2023-07-18 07:15:39,342 INFO [RS:0;jenkins-hbase4:43187] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43187%2C1689664535022:(num 1689664535602) 2023-07-18 07:15:39,342 DEBUG [RS:0;jenkins-hbase4:43187] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:39,342 INFO [RS:0;jenkins-hbase4:43187] regionserver.LeaseManager(133): Closed leases 2023-07-18 07:15:39,342 INFO [RS:0;jenkins-hbase4:43187] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-18 07:15:39,342 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:39,343 INFO [RS:0;jenkins-hbase4:43187] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43187 2023-07-18 07:15:39,345 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-18 07:15:39,345 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43187,1689664535022 2023-07-18 07:15:39,346 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43187,1689664535022] 2023-07-18 07:15:39,346 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43187,1689664535022; numProcessing=4 2023-07-18 07:15:39,348 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43187,1689664535022 already deleted, retry=false 2023-07-18 07:15:39,348 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43187,1689664535022 expired; onlineServers=0 2023-07-18 07:15:39,348 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38555,1689664534957' ***** 2023-07-18 07:15:39,348 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-18 07:15:39,349 DEBUG [M:0;jenkins-hbase4:38555] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b5ddd3e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-18 07:15:39,349 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-18 07:15:39,351 DEBUG [Listener at 
localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-18 07:15:39,351 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-18 07:15:39,351 INFO [M:0;jenkins-hbase4:38555] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1cb67a7f{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-18 07:15:39,351 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-18 07:15:39,352 INFO [M:0;jenkins-hbase4:38555] server.AbstractConnector(383): Stopped ServerConnector@7dfb63db{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:39,352 INFO [M:0;jenkins-hbase4:38555] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-18 07:15:39,352 INFO [M:0;jenkins-hbase4:38555] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ae9f274{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-18 07:15:39,353 INFO [M:0;jenkins-hbase4:38555] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@8c04e09{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/hadoop.log.dir/,STOPPED} 2023-07-18 07:15:39,353 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38555,1689664534957 2023-07-18 07:15:39,353 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38555,1689664534957; all regions closed. 2023-07-18 07:15:39,353 DEBUG [M:0;jenkins-hbase4:38555] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-18 07:15:39,353 INFO [M:0;jenkins-hbase4:38555] master.HMaster(1491): Stopping master jetty server 2023-07-18 07:15:39,354 INFO [M:0;jenkins-hbase4:38555] server.AbstractConnector(383): Stopped ServerConnector@76b45728{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-18 07:15:39,354 DEBUG [M:0;jenkins-hbase4:38555] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-18 07:15:39,355 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-18 07:15:39,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664535320] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689664535320,5,FailOnTimeoutGroup] 2023-07-18 07:15:39,355 DEBUG [M:0;jenkins-hbase4:38555] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-18 07:15:39,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664535321] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689664535321,5,FailOnTimeoutGroup] 2023-07-18 07:15:39,355 INFO [M:0;jenkins-hbase4:38555] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 
2023-07-18 07:15:39,355 INFO [M:0;jenkins-hbase4:38555] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-18 07:15:39,355 INFO [M:0;jenkins-hbase4:38555] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-18 07:15:39,355 DEBUG [M:0;jenkins-hbase4:38555] master.HMaster(1512): Stopping service threads 2023-07-18 07:15:39,355 INFO [M:0;jenkins-hbase4:38555] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-18 07:15:39,355 ERROR [M:0;jenkins-hbase4:38555] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-18 07:15:39,355 INFO [M:0;jenkins-hbase4:38555] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-18 07:15:39,355 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-18 07:15:39,356 DEBUG [M:0;jenkins-hbase4:38555] zookeeper.ZKUtil(398): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-18 07:15:39,356 WARN [M:0;jenkins-hbase4:38555] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-18 07:15:39,356 INFO [M:0;jenkins-hbase4:38555] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-18 07:15:39,356 INFO [M:0;jenkins-hbase4:38555] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-18 07:15:39,356 DEBUG [M:0;jenkins-hbase4:38555] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-18 07:15:39,356 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:39,356 DEBUG [M:0;jenkins-hbase4:38555] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:39,356 DEBUG [M:0;jenkins-hbase4:38555] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-18 07:15:39,356 DEBUG [M:0;jenkins-hbase4:38555] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-18 07:15:39,356 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.17 KB heapSize=90.62 KB 2023-07-18 07:15:39,369 INFO [M:0;jenkins-hbase4:38555] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.17 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/937e3fc4cc794b9c9862a81bbd60947f 2023-07-18 07:15:39,375 DEBUG [M:0;jenkins-hbase4:38555] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/937e3fc4cc794b9c9862a81bbd60947f as hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/937e3fc4cc794b9c9862a81bbd60947f 2023-07-18 07:15:39,380 INFO [M:0;jenkins-hbase4:38555] regionserver.HStore(1080): Added hdfs://localhost:43393/user/jenkins/test-data/c3d4e3d9-8509-8616-aafd-ce27ab88274d/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/937e3fc4cc794b9c9862a81bbd60947f, entries=22, sequenceid=175, filesize=11.1 K 2023-07-18 07:15:39,381 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegion(2948): Finished flush of dataSize ~76.17 KB/77993, heapSize ~90.60 KB/92776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=175, compaction requested=false 2023-07-18 07:15:39,383 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-18 07:15:39,383 DEBUG [M:0;jenkins-hbase4:38555] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-18 07:15:39,386 INFO [M:0;jenkins-hbase4:38555] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-18 07:15:39,386 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-18 07:15:39,386 INFO [M:0;jenkins-hbase4:38555] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38555 2023-07-18 07:15:39,388 DEBUG [M:0;jenkins-hbase4:38555] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38555,1689664534957 already deleted, retry=false 2023-07-18 07:15:39,609 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,609 INFO [M:0;jenkins-hbase4:38555] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38555,1689664534957; zookeeper connection closed. 2023-07-18 07:15:39,609 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): master:38555-0x10177493c9e0000, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,709 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,709 INFO [RS:0;jenkins-hbase4:43187] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43187,1689664535022; zookeeper connection closed. 
2023-07-18 07:15:39,709 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:43187-0x10177493c9e0001, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,710 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@751af427] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@751af427 2023-07-18 07:15:39,810 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,810 INFO [RS:2;jenkins-hbase4:34145] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34145,1689664535113; zookeeper connection closed. 2023-07-18 07:15:39,810 DEBUG [Listener at localhost/36955-EventThread] zookeeper.ZKWatcher(600): regionserver:34145-0x10177493c9e0003, quorum=127.0.0.1:63390, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-18 07:15:39,810 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7a58ad38] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7a58ad38 2023-07-18 07:15:39,810 INFO [Listener at localhost/36955] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-18 07:15:39,810 WARN [Listener at localhost/36955] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:39,814 INFO [Listener at localhost/36955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:39,918 WARN [BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:39,918 WARN [BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-29044841-172.31.14.131-1689664534220 (Datanode Uuid 680555c7-f1d3-4e78-a16f-265f6a6c804b) service to localhost/127.0.0.1:43393 2023-07-18 07:15:39,919 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data5/current/BP-29044841-172.31.14.131-1689664534220] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:39,919 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data6/current/BP-29044841-172.31.14.131-1689664534220] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:39,921 WARN [Listener at localhost/36955] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:39,924 INFO [Listener at localhost/36955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:40,027 WARN [BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 
2023-07-18 07:15:40,027 WARN [BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-29044841-172.31.14.131-1689664534220 (Datanode Uuid bacb708d-4601-42fb-b23d-abc95459762c) service to localhost/127.0.0.1:43393 2023-07-18 07:15:40,028 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data3/current/BP-29044841-172.31.14.131-1689664534220] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:40,028 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data4/current/BP-29044841-172.31.14.131-1689664534220] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:40,029 WARN [Listener at localhost/36955] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-18 07:15:40,036 INFO [Listener at localhost/36955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:40,140 WARN [BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-18 07:15:40,140 WARN [BP-29044841-172.31.14.131-1689664534220 heartbeating to localhost/127.0.0.1:43393] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-29044841-172.31.14.131-1689664534220 (Datanode Uuid c14261ca-5184-4d4b-b2c9-7863ce20aa30) service to localhost/127.0.0.1:43393 2023-07-18 07:15:40,141 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data1/current/BP-29044841-172.31.14.131-1689664534220] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:40,141 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/eea770cb-a07c-d149-460c-b86a7c95b9e5/cluster_2efc0d65-02bf-78b8-861f-1f4549bc57c3/dfs/data/data2/current/BP-29044841-172.31.14.131-1689664534220] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-18 07:15:40,153 INFO [Listener at localhost/36955] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-18 07:15:40,272 INFO [Listener at localhost/36955] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-18 07:15:40,299 INFO [Listener at localhost/36955] hbase.HBaseTestingUtility(1293): Minicluster is down
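
Taken together, this section is the tail of a minicluster teardown: regions flush and close, each region server's ephemeral znode disappears and the master processes the expirations, the master flushes its local master:store region and stops, and finally the datanodes and the MiniZK cluster shut down, ending with "Minicluster is down". The sketch below is a hedged illustration, not the actual test source, of the HBaseTestingUtility lifecycle that produces this kind of startup and shutdown log; the class name and cluster sizing are illustrative.

import static org.junit.Assert.assertNotNull;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Illustrative sizing; the builder controls master, region server,
    // datanode and ZooKeeper counts for the in-process cluster.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Drives a teardown like the one recorded above: regions close, region
    // servers and the master stop, then DFS and ZooKeeper are shut down.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void clusterAnswersAdminCalls() throws Exception {
    assertNotNull(TEST_UTIL.getAdmin().getClusterMetrics());
  }
}

The utility runs master, region servers, mini DFS and mini ZooKeeper in the test JVM, which is why a single shutdown call produces the interleaved region server, master, datanode and ZK log lines seen above.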